Mainframe Batch

The age-old datacenter tug-of-war between the need for more computing power and the need to keep ongoing IT costs under control never ends, and it continues as intensely as ever in today’s large business datacenters.

And with the mainframe enclosed firmly within its silo, the game is more dramatic than it has ever been. IT folks outside the silo remind everyone that ongoing mainframe costs are a drag on the new agile development efforts, while those inside the silo point out that the mainframe is responsible for 75% or more of business revenue, and that access from outside the silo is driving up their operational costs. We’ve all heard these arguments, and to be fair, they’re all true.

The mainframe within the silo needs more money to handle growing workloads, while business managers outside the silo demand that those costs be reduced. The truth is that you need to pay attention to both performance and cost: you cannot spend with impunity to push performance as far as it will go, any more than you can throttle performance down just to say you saved a bundle of cash. So, what’s the answer?

State of the art for improving batch performance

Slow-running batch can have many causes, including waiting for dataset recalls, poor utility choice, waiting on CICS, waiting on datasets, waiting on initiators (legacy), waiting on tape drives, and poor dataset buffering performance. This is covered nicely in an excellent article by David Stephens at Longpela, “Why Your Batch is Running Slow”.

The IBM Redbook Batch Modernization on z/OS, although six years old now, remains the de facto standard for optimizing batch processing using everything under the IBM sun. It covers new batch functionality (as of 2012), agile batch, BatchPipes, WebSphere XD, middleware, and much more.

But most mainframe shops have figured these things out by now (if not, you have your work cut out for you). Even after following all of the best practices, crossing the t’s and dotting the i’s, batch performance and processing costs can still be a drag on datacenter resources. This is fairly common in high-intensity transaction processing environments such as banking, financial services, and insurance.

The silver bullet?

For these mainframe shops, the silver bullet just might be one or more unique mainframe optimization technologies: approaches that are in use right now across the US and Europe on some of the highest-performing mainframe systems in the world.

In-memory technology allows some batch jobs to run many times faster without adding to your current operational costs. Using your current assets, with no changes to your database or to your program logic, batch processing can use far less CPU and run much faster. Reference data that each transaction reads many times can instead be served from high-performance in-memory tables. Only a very small percentage of the reference data, the data accessed most often, is copied into memory, where it can be read as much as 100 times faster than optimized, buffered data.
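To make the idea concrete, here is a minimal Python sketch of the pattern, not taken from any particular product: the hottest reference data is loaded once into an in-memory table, so each transaction does a lookup in memory instead of another database read. The currency-rate example and the function names are invented for illustration.

```python
# Minimal sketch of serving hot reference data from memory instead of
# re-reading it from the database on every transaction.
# fetch_hot_reference_rows() and the rate-table example are hypothetical.

def fetch_hot_reference_rows():
    """Stand-in for the one-time read of the small, most-referenced subset
    of reference data (rate tables, code tables, etc.) from the database."""
    return [("USD", 1.00), ("EUR", 1.08), ("GBP", 1.27)]

# Loaded once at job start; only the hot subset is copied into memory.
RATE_TABLE = dict(fetch_hot_reference_rows())

def price_in_usd(amount, currency):
    # Each transaction now resolves its reference data with an O(1)
    # in-memory lookup rather than another buffered I/O to the database.
    return amount * RATE_TABLE[currency]

if __name__ == "__main__":
    print(price_in_usd(250.0, "EUR"))   # 270.0
```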

Automated soft capping can be used to bring down WLC costs. Mission-critical processing (like your batch jobs) doesn’t have to be capped at all; only lower-priority work experiences performance capping, and only during the peak R4HA period. The technology relies on rapidly changing LPAR capacity settings, a low-risk automated process, and on sharing MSU resources between LPARs on the fly. Implementation typically saves 7-10% on operations costs over the year, and it can make a big difference on new workloads as well as legacy workloads whose costs keep creeping up.
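The rebalancing logic can be pictured as follows. This Python sketch is purely illustrative and assumes an invented LPAR inventory, a hypothetical group MSU limit, and made-up priorities; the actual products drive the LPAR defined-capacity settings themselves.

```python
# Illustrative only: give critical LPARs their full R4HA demand, and squeeze
# only lower-priority LPARs to stay under the group MSU limit.

GROUP_MSU_LIMIT = 1200  # assumed total MSUs the site is willing to pay for at peak

# Hypothetical LPAR inventory: name, business priority, current R4HA in MSUs
lpars = [
    {"name": "PRODBAT", "critical": True,  "r4ha": 700},
    {"name": "DEVTEST", "critical": False, "r4ha": 300},
    {"name": "SANDBOX", "critical": False, "r4ha": 250},
]

def rebalance(lpars, group_limit):
    """Critical LPARs are never capped; the leftover capacity is shared
    proportionally among the lower-priority LPARs during the peak window."""
    critical_demand = sum(l["r4ha"] for l in lpars if l["critical"])
    leftover = max(group_limit - critical_demand, 0)
    noncritical = [l for l in lpars if not l["critical"]]
    noncritical_demand = sum(l["r4ha"] for l in noncritical) or 1
    caps = {}
    for l in lpars:
        if l["critical"]:
            caps[l["name"]] = None                       # never capped
        else:
            share = l["r4ha"] / noncritical_demand
            caps[l["name"]] = int(leftover * share)      # proportional squeeze
    return caps

print(rebalance(lpars, GROUP_MSU_LIMIT))
# {'PRODBAT': None, 'DEVTEST': 272, 'SANDBOX': 227}
```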

IT business intelligence can provide cost-saving insight into your batch systems that you have never had before. By combining your own IT data, mainframe and distributed-systems Big Data, with internal ownership and cost information, you gain transparency into which business units or applications are using which IT resources, and at what cost. You can even determine how well external computing service providers are controlling costs for you. IT business intelligence can help reduce IT capacity-related costs by 15-20%.
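In its simplest form, this is a join between resource-usage data and ownership/cost data. The sketch below assumes invented CSV layouts and a made-up blended $/MSU-hour rate, just to show the shape of a showback report.

```python
# Hedged sketch: combine capacity usage with ownership and cost data to show
# which business units drive which costs. File layouts and rate are assumed.

import csv
from collections import defaultdict

COST_PER_MSU_HOUR = 3.50   # assumed blended rate, purely illustrative

def showback(usage_csv, owners_csv):
    # owners.csv: application,business_unit
    owner = {}
    with open(owners_csv) as f:
        for row in csv.DictReader(f):
            owner[row["application"]] = row["business_unit"]

    # usage.csv: application,msu_hours (e.g. distilled from SMF-based capacity data)
    totals = defaultdict(float)
    with open(usage_csv) as f:
        for row in csv.DictReader(f):
            bu = owner.get(row["application"], "UNASSIGNED")
            totals[bu] += float(row["msu_hours"]) * COST_PER_MSU_HOUR

    for bu, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{bu:<20} ${cost:,.2f}")

# Usage (hypothetical files): showback("usage.csv", "owners.csv")
```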

Improving Db2 SQL quality through automation can make a significant difference in both the performance and the cost of your current batch applications. Automated QC during development can stop bad SQL before it leaves the developer’s desktop or the development and test environments, and it can uncover long-standing inefficiencies in your production environment that your monitoring tools won’t find. This type of automation also reduces quality risk in outsourced development environments and can deliver a 10% reduction in CPU resource usage.
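The flavor of such a check can be sketched in a few lines of Python. The rules below are illustrative examples of well-known expensive SQL patterns, not the rule set of any specific product.

```python
# Pre-commit style sketch: flag a few common expensive SQL patterns before
# the code leaves the developer's desktop. Rules are examples only.

import re
import sys

RULES = [
    (r"\bSELECT\s+\*", "SELECT * returns unneeded columns; list only what the program uses"),
    (r"\bWHERE\s+\w+\s*\+\s*\d", "arithmetic on a column can make the predicate non-indexable"),
    (r"\bLIKE\s+'%", "leading-wildcard LIKE usually forces a scan"),
]

def check(sql_text):
    """Return the messages for every rule that matches the SQL text."""
    return [msg for pattern, msg in RULES
            if re.search(pattern, sql_text, re.IGNORECASE)]

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for finding in check(f.read()):
                print(f"{path}: {finding}")
```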

Delivering data off platform for batch processing can be a solution for some. It is probably not the solution for your business-critical batch, which is already running on the best possible platform. However, if secondary or tertiary batch runs conflict with business-critical runs, they can be ported to other platforms for processing, ensuring that they do not affect mainline processing. The technology used is near-real-time, high-performance, multi-platform, multi-database replication. Because it uses changed data capture, it has very low impact on network data traffic, and as a side benefit, the off-loaded batch processing runs on a less expensive platform.
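Conceptually, changed data capture means shipping only the changed rows and replaying them against the off-platform copy. The Python sketch below assumes an invented change-event format and uses SQLite purely as a stand-in target; real replication products handle this internally.

```python
# Conceptual sketch of applying a CDC change stream to an off-platform replica
# where secondary batch can run. Event format and helpers are invented.

import json
import sqlite3   # stand-in for the off-platform target database

def apply_event(conn, event):
    """Apply one captured change (insert/update/delete) to the replica."""
    table, op, key = event["table"], event["op"], event["key"]
    row = event.get("row", {})
    if op == "I":
        cols = ", ".join(row)
        marks = ", ".join("?" for _ in row)
        conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})", list(row.values()))
    elif op == "U":
        sets = ", ".join(f"{c} = ?" for c in row)
        conn.execute(f"UPDATE {table} SET {sets} WHERE id = ?", list(row.values()) + [key])
    elif op == "D":
        conn.execute(f"DELETE FROM {table} WHERE id = ?", [key])

def replay(conn, change_log_path):
    # Only changed rows travel over the network, which is why CDC keeps
    # replication traffic light compared with full refreshes.
    with open(change_log_path) as f:
        for line in f:
            apply_event(conn, json.loads(line))
    conn.commit()

# Usage (hypothetical): replay(sqlite3.connect("replica.db"), "changes.jsonl")
```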

Optimization for your mainframe batch

To optimize your mainframe batch performance, you need to follow the well-known best practices discussed by IBM and by mainframe IT consultancies large and small. If you still need to get more out of your mainframe, or if you need to cut operational costs further without hampering performance, you owe it to yourself to look at one or more of these best-kept-secret solutions. They’re mature solutions that all do one thing well: they work.

In addition to planning, developing, and managing the Planet Mainframe blog, Keith is a marketing copywriting consultant, providing messaging for corporate and partner products and solutions.
