In a growing economy, rising business workloads are making their impact felt in today’s busiest data centers through increased online transactions, web requests, mobile requests, batch jobs, ad hoc queries, data warehousing analysis, and more. Meanwhile, disruptive technologies like mobile, big data, business analytics, cloud computing and storage, digital payments, the algorithmic economy and the Internet of Things (IoT) are adding to the pressure on mainframe data center systems.
And that impact is often perceived as a decrease in system performance: slower-running applications, less responsive databases and a general erosion of computing response times as mainframe systems cope with the enormous workload demands being piled upon them. Now they really need to run faster.
IBM to the rescue
Fortunately, there are several ways to get those mainframe systems to run faster. One way is to make your database buffers more efficient, and IBM has you covered there with an excellent Redbook, “IBM DB2 11 for z/OS Buffer Pool Monitoring and Tuning.”
According to IBM, buffer pools are still a key resource for ensuring good performance, and this becomes more important as the gap between processor speed and disk response time for a random-access I/O widens with each new generation of processor. An IBM System z® processor can be configured with large amounts of storage which, used wisely, can compensate by avoiding synchronous I/O altogether.
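To make that tradeoff concrete, here is a minimal, hypothetical Java sketch of the buffer-pool idea: a bounded page cache where hits are served from memory and only misses pay for a synchronous disk read. The class and its simple LRU policy are invented for this post; real DB2 buffer pools are far more sophisticated.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy read-through "buffer pool": a bounded, LRU-evicting page cache.
 *  Illustrative only -- real DB2 buffer pools are far more sophisticated. */
class ToyBufferPool {
    private final int capacityPages;
    private final Map<Long, byte[]> pool;

    ToyBufferPool(int capacityPages) {
        this.capacityPages = capacityPages;
        // An access-ordered LinkedHashMap gives us simple LRU eviction.
        this.pool = new LinkedHashMap<Long, byte[]>(capacityPages, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
                return size() > ToyBufferPool.this.capacityPages;
            }
        };
    }

    /** Hit: served from memory, no I/O. Miss: one synchronous read, then resident. */
    byte[] getPage(long pageId) {
        byte[] page = pool.get(pageId);
        if (page == null) {
            page = readPageFromDisk(pageId); // the cost the pool exists to avoid
            pool.put(pageId, page);
        }
        return page;
    }

    private byte[] readPageFromDisk(long pageId) {
        // Placeholder for a synchronous random-access I/O.
        return new byte[4096];
    }
}
```

The more storage you give the pool, the more reads resolve in memory, which is exactly the compensation IBM describes.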
This is accurate, and most mainframe shops are doing a decent job of it – the Redbook is full of proven techniques that are well known in the mainframe world. If you’re not on top of this, you need to be, and there are plenty of great consulting firms out there (including IBM 🙂) that can help you with it.
Another approach is to make your batch applications run faster, which you may accomplish in part through better database buffer management. But to optimize your batch processing, your first stop is again an IBM Redbook, this one dedicated to batch: “Approaches to Optimize Batch Processing on z/OS.” It was last revised in 2012 and is a bit dated, but it is still your best starting point. Next is yet another IBM Redbook, aimed at organizations running mainframe systems in ultra-high-intensity transaction processing environments: “Optimizing System z Batch Applications by Exploiting Parallelism.” The core idea is sketched below.
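In miniature, the parallelism idea looks like the following hedged Java sketch: split a batch job’s input into independent partitions and run them concurrently, completing when every partition completes. The partitioning scheme and the process() logic are hypothetical placeholders; the Redbook’s actual techniques operate at the job, step and sysplex level.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Simplified illustration of batch parallelism: partition the work,
 *  run the partitions concurrently, and wait for all of them to finish. */
class ParallelBatch {
    static void run(List<List<String>> partitions) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            CountDownLatch done = new CountDownLatch(partitions.size());
            for (List<String> partition : partitions) {
                workers.submit(() -> {
                    try {
                        partition.forEach(ParallelBatch::process);
                    } finally {
                        done.countDown();
                    }
                });
            }
            done.await(); // the batch completes when every partition completes
        } finally {
            workers.shutdown();
        }
    }

    private static void process(String record) {
        // Hypothetical per-record work (parse, compute, write).
    }
}
```

Elapsed time shrinks toward the runtime of the largest partition, which is why the Redbook puts so much emphasis on splitting work evenly.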
These are the basic toolsets available straight from IBM that you can use to speed up your mainframe processing. Everyone should be doing as much of this as possible, and for the most part, this is being done. If you haven’t delved deeply into these assets, you really need to.
Third-party help
As workloads grow, you’re going to need more performance, and as luck would have it, there are buffer- and batch-specific performance solutions out there as well: CA’s Hyper-Buf, BMC’s MainView Batch Optimizer, Rocket Software’s Performance Essential, and many more. These are all great tools, and most of the big mainframe data centers already have at least one of them. Each has its fanboys as well as its detractors, but here you’re comparing apples to apples: use the one that works best for you, or for your consultants. They’ll only take you so far, though.
Then there are your consultants, from the IBMs and Compuwares of the world to folks like Longpela, EPS Pivitor and many more. Like the toolsets mentioned above, consulting firms have their successes and less-than-successes, and most mainframe shops employ them regularly (or occasionally). Many of these firms can definitely make a difference, and the good ones will deliver the excellent solutions that you need. But again, you’re comparing apples to apples, and they, too, can only take you so far.
We’re talking about the law of diminishing returns – you can tune and optimize the heck out of your buffers and your batch processing (and you need to keep doing that), but eventually you’ll be taking baby steps. And taking baby steps when you need to be able to shift into the next gear can be both frustrating and self-defeating.
That extra gear
So what is the next gear, the one that can take you beyond where the best buffer and batch optimization tools and the best performance consultants can generally take you? In-memory technology. While the toolsets and consultants mentioned above focus mostly on I/O reduction, in-memory technology eliminates I/O: now you’re comparing apples to oranges. And this is an orange you need if you’re struggling to squeeze more performance out of your mainframe systems.
In-memory technology eliminates not only the I/O but also the database overhead for your most frequently accessed read-only (RO) data, allowing it to be accessed 30-100 times faster. If your applications are reading some of that RO data hundreds or thousands of times per second, the access-time savings add up quickly and can improve application performance, often dramatically.
Implementation is straightforward: small amounts of RO data are copied into high-performance in-memory tables, and applications then access that data through a small, efficient API, as in the sketch below. All other data access (99%+) is unchanged. This technology is being used right now by many of the largest US banks and credit card companies to boost the performance of their mission-critical mainframe applications.
In-memory technology augments your database and requires no changes to application logic.
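As a rough sketch of the access pattern (the class and method names below are invented for illustration; actual vendor APIs differ), the application takes a one-time snapshot of a small RO table into process memory, then serves hot-path reads as plain in-process lookups instead of database calls:

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical in-memory read-only table: loaded once from the database,
 *  then served from process storage with no I/O and no SQL overhead. */
final class InMemoryRoTable {
    private final Map<String, String> rows;

    private InMemoryRoTable(Map<String, String> rows) {
        this.rows = rows;
    }

    /** One-time load of a small, frequently read table (e.g. currency codes). */
    static InMemoryRoTable load(Map<String, String> snapshotFromDb) {
        return new InMemoryRoTable(new HashMap<>(snapshotFromDb));
    }

    /** Hot-path read: an in-process hash lookup instead of a database call. */
    String get(String key) {
        return rows.get(key);
    }
}
```

Because the table is read-only, the hot path needs no locking, logging or buffer management, and that absence of overhead is broadly where speedups of this kind come from.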
From here…
IBM provides the groundwork for solid buffer performance. There are also several third-party buffer managers and batch managers out there that do a decent job of reducing I/O in mainframe processing in high-intensity transaction environments. And don’t forget your friendly neighborhood IT performance consultants; they will help you tune your buffers to optimum performance. But remember, this is apples and oranges: in-memory technology gives you the extra gear you need for superior application performance, a level of performance that all of the buffer tweaking and tuning in the world could never deliver.
In addition to planning, developing and managing the Planet Mainframe blog, Keith is a marketing copywriting consultant, providing messaging for corporate and partner products and solutions.