Mainframes are expensive – no-one disagrees with that. But they can run industrial-strength workloads for less than it would cost on other platforms, and with far less likelihood of any kind of system malfunction. Even so, it makes sense to understand mainframe performance and the ways those costs can be reduced.

Let’s start by looking at performance and performance management. Here are some of the metrics used by performance-monitoring tools (also see “Does Your Mainframe Need an Oil Change”):

  • Average Throughput – the average number of service completions per unit time, e.g., the number of transactions per second or minute. This metric is typically used for transactional workloads such as CICS.
  • Average Response Time – the average amount of time it takes to complete a single service. Again, this is used to measure transactional workloads. In addition, it can be used to set SLA goals for workloads.
  • Resource Utilization – how long a resource was busy. Measurements include CPU utilization, processor storage utilization, I/O rates, and paging rates, and show how much time a workload, batch or transactional, spends on resources over a period of time.
  • Resource Velocity – a measure of resource (e.g., CPU) contention. While one workload is using a resource, other workloads sit in the waiting queue. Resource velocity is the ratio of the time spent using the resource (A) to the total time spent using the resource (A) plus the time spent waiting in the queue (B), i.e., A / (A + B). A velocity near zero percent indicates heavy contention for the resource, while 100 percent means there’s no contention. This is a useful metric for SLA goals (see the sketch after this list).
  • Performance Index – used to determine how workloads are performing with respect to their defined goals. It compares the goal with what was actually achieved, where a value of 1 means a workload is exactly meeting its goal, less than 1 means it’s beating its goal, and greater than 1 means it’s missing its goal.
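
To get a concrete feel for those last two metrics, here’s a minimal sketch in Python. It isn’t tied to any particular monitoring tool, and the sample numbers are invented for illustration; real figures would come from your performance monitor (e.g., RMF/SMF data).

```python
# Minimal sketch of the two ratio metrics described above.
# The sample numbers are invented; real values would come from
# a performance monitor such as RMF/SMF reports.

def resource_velocity(using_time: float, waiting_time: float) -> float:
    """Velocity = A / (A + B), where A is time spent using the
    resource and B is time spent queued waiting for it.
    Near 0% means heavy contention; 100% means none."""
    return 100.0 * using_time / (using_time + waiting_time)

def performance_index(achieved_response: float, goal_response: float) -> float:
    """PI for a response-time goal: achieved / goal.
    1.0 = exactly meeting the goal, < 1.0 = beating it,
    > 1.0 = missing it."""
    return achieved_response / goal_response

# A workload that used the CPU for 30 seconds and queued for 10:
print(f"velocity: {resource_velocity(30, 10):.0f}%")   # 75%

# A transaction with a 0.5s response-time goal that completes in 0.4s:
print(f"PI: {performance_index(0.4, 0.5):.2f}")        # 0.80 (beating its goal)
```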

Understanding these metrics gives you a picture of how workloads are currently performing on the mainframe. But what can you do to cut mainframe costs?

The first option that many sites are using to reduce software costs is IBM’s Variable Workload License Charge (VWLC) programs. The second common option – which also comes with its own cost – is to make use of zIIPs or other specialty processors.

And with the IBM Z (announced last summer) came container pricing models for qualified solutions. A container can be any address space, or group of address spaces, supporting a particular workload. Container pricing options are meant to give organizations the predictability and transparency they require for their business, and these pricing models are scalable within and across LPARs. Currently, there are three container pricing solutions:

  • Application Development and Test Solution
  • New Application Solution
  • Payments Pricing Solution

As yet, though, we don’t know how competitively priced these will be.

Currently, the biggest benefit can come from – dare I say it – rewriting code. Most mainframe sites are running COBOL programs that may well have started life many years ago and have been retained because they work. No-one has looked at how efficiently (or not) they use mainframe resources. A lot of the original code was written by great coders, but much was written by people with less expertise in the language or by coders in a hurry. The result is that performance could be improved by rewriting the code. Of course, the big hurdle is that the source code may no longer exist, and, where it does, it probably lacks any useful comments.

In these days of the API economy, it makes sense to rewrite frequently used pieces of code because they can be re-used – making it a very good investment of time and effort. The rewritten code can then be used to connect to new applications running on mobile devices or the Internet of Things, and generally modernize the way things are done. In addition, particularly poorly performing parts of the code need to be rewritten so that they use fewer resources to achieve the same results. Apart from the obvious advantages in terms of performance, updating code can also make it easier to maintain when future development is required. And it also gives an excellent opportunity to ensure that there are no security exposures in the code.
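
As a sketch of what that connection might look like, the snippet below calls a hypothetical REST endpoint fronting a rewritten COBOL routine. The URL and field names are invented for illustration; in practice, something like z/OS Connect would expose the service.

```python
# Hypothetical example: calling a rewritten, API-exposed mainframe
# routine from a distributed or mobile application. The URL and the
# "balance" field name are invented for illustration.
import json
import urllib.request

def check_balance(account_id: str) -> float:
    """Call a (hypothetical) REST API that fronts a rewritten
    COBOL balance-inquiry routine."""
    url = f"https://mainframe.example.com/api/accounts/{account_id}/balance"
    with urllib.request.urlopen(url) as response:
        payload = json.loads(response.read().decode("utf-8"))
    return payload["balance"]

print(check_balance("12345678"))
```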

To get the biggest bang for your buck, it makes sense to start by rewriting the code that’s going to see the biggest improvement in performance. And, although there is a cost involved, it means that the code legacy being passed on to a new, younger generation of mainframers will be better placed for future development and change in the world of business and mainframes, compared with competitor organizations that will still have to make changes to their original COBOL and Assembler code from back in the 1970s.

Rewriting old code is something that companies are going to have to start addressing, one way or another, to keep their organization in business and their mainframe flexible enough for future requirements. And how will they find the budget to fund this? The answer comes from the performance savings made on the first applications they rewrite.

Regular Planet Mainframe Blog Contributor
Trevor Eddolls is CEO at iTech-Ed Ltd, and an IBM Champion since 2009. He is probably best known for chairing the Virtual IMS, Virtual CICS, and Virtual Db2 user groups, and is featured in many blogs. He has been editorial director for the Arcati Mainframe Yearbook for many years.
