Part 3: The pitfalls of MIPS
This is the third in a series of blogs about how to follow up on your mainframe outsourcer. The first blog was about why it is important to follow up. The second blog introduced the two metrics that are generally used as the basis for mainframe capacity billing: MIPS and MSU. In this article we will focus on the pitfalls of calculating MIPS.
As discussed in the last blog, MIPS is a way of normalizing CPU usage across different CPU models and hardware configurations. That makes MIPS useful for billing purposes because the same workload will consume about the same number of MIPS, and therefore cost the customer the same, as the outsourcer upgrades to new machines or changes the configuration.
On the surface, MIPS calculation is very simple: you measure the number of seconds the CPU is busy on a certain workload and then multiply by a configuration-dependent MIPS factor. This is like calculating the volume of water consumed by counting the number of bottles and multiplying by the capacity of each bottle. The MIPS factor corresponds to the size of the bottles.
CPU seconds are measured by the operating system. The CPU seconds used by an LPAR, for example, can be found in the SMF 70 record and CPU seconds for a job or address space in the SMF 30 record.
Determining the configuration dependent MIPS factor is a two-step process:
- Determine the “effective” machine model
- Look up the MIPS rate for that model in a table
Let’s look at each of these steps.
Determining the effective machine model
Newer IBM mainframe models deliver more MIPS per CPU second than older ones. Basically, new models are faster and more efficient than the older models. Faster means more cycles per second, though newer models don’t always have faster cycle times. More efficient means fewer cycles per instruction, for example because a better cache structure means you waste less time waiting on data. The combination of more cycles per second and fewer cycles per instruction means more instructions per second. In other words, newer models get more useful work done per second, reflected in a higher MIPS factor.
While newer machines have a higher MIPS rate than older ones, larger machines have a lower MIPS rate than smaller ones. The more processors that are active on the machine, the more time is spent managing the work between these processors. This is referred to as the MP (multiprocessing) overhead and means that you get less useful work done per processor as you add more processors to the machine.
The chart below shows how the MIPS rate declines as you add processors to a Z13:
Looking up the MIPS rate
Once you have determined the ‘effective’ model, then you can look up the MIPS rate for that model in a standard MIPS table. There are several sources for MIPS tables, but those produced by Watson Walker and IBM are generally considered the industry standard. These tables give a MIPS factor for each machine model (e.g. z13) and configuration (e.g. 33 processors). One other dimension is included in the tables – Relative Nest Intensity (RNI). RNI reflects the fact that some workloads (like batch) run more efficiently while other workloads (like online) run less efficiently. ‘Average’ RNI is generally used for billing purposes. In a later blog we will go into more depth on RNI.
Here is an example of a portion of a MIPS factor table (thanks to Watson Walker).
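The lookup step can be sketched as a small table keyed by machine model and processor count. This is illustrative only: the three z13 values are the per-processor figures quoted in this article (average RNI), while a real Watson Walker or IBM table covers every model, configuration, and RNI level.

```python
# Hypothetical excerpt of a MIPS factor table, keyed by (model, processor count).
# Values are the per-processor z13 figures quoted in this article (average RNI);
# a real table from Watson Walker or IBM covers all models, sizes, and RNI levels.
MIPS_TABLE = {
    ("z13", 5): 1479.0,   # small machine: little MP overhead, high per-processor rate
    ("z13", 28): 1102.0,
    ("z13", 33): 1070.9,  # large machine: MP overhead lowers the per-processor rate
}

def mips_factor(model: str, processors: int) -> float:
    """Look up the per-processor MIPS factor for an 'effective' machine model."""
    try:
        return MIPS_TABLE[(model, processors)]
    except KeyError:
        raise ValueError(f"No table entry for {model} with {processors} processors")
```

Note how the per-processor rate falls as the processor count grows: that is the MP overhead described above, baked into the table.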
For billing purposes, MIPS are normally calculated as an average over one hour:
MIPS = (cpu seconds per hour / 3600) * MIPS per processor
For example, if we use 1000 cpu seconds in an hour on a z13 with 33 processors (model 2964-733) we get:
(1000 cpu seconds / 3600) * 1070.9 = 297.5 MIPS
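The hourly calculation above can be written as a one-line function. This is a minimal sketch of the formula, not billing software; 1070.9 is the per-processor rate for the 2964-733 used in the worked example.

```python
def mips_used(cpu_seconds: float, mips_per_processor: float,
              interval_seconds: float = 3600) -> float:
    """Average MIPS over an interval: (CPU seconds / interval) * MIPS factor."""
    return (cpu_seconds / interval_seconds) * mips_per_processor

# The worked example from the text: 1000 CPU seconds in one hour on a 33-way z13.
print(round(mips_used(1000, 1070.9), 1))  # → 297.5
```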
Here are some of the pitfalls we have seen over the years reviewing various outsourcers’ MIPS calculations:
- Calculating the MIPS as if the LPAR were a stand-alone machine. This gives an artificially high MIPS rate – meaning higher cost to you. For example, if you have an LPAR with 5 logical processors (LCPs) running on the configuration above, this method would give a MIPS rate of 1479 rather than 1071. That equates to a nearly 40% higher cost.
- Including only GCPs, and ignoring zIIPs, in determining the ‘effective’ model. For example, a z13 with 28 GCPs and 5 zIIPs should be considered a 33-processor machine (1071 MIPS) and not a 28-processor machine (1102 MIPS). There is however some debate about whether an extra zIIP adds the same overhead as an extra GCP. Many contracts pre-date the advent of zIIPs and are silent on this issue.
- MIPS tables based on the outsourcer’s own measurements and methodologies. The MIPS tables should come from an independent source; otherwise the methodology for determining the MIPS rate must be clearly understood and agreed, which generally involves a cumbersome benchmarking exercise every time a new computer model is introduced. Using standard tables avoids this. Furthermore, an outsourcer’s own table makes it more difficult to compare prices with other outsourcers.
- Many other ‘creative’ models are used to determine the MIPS factor. One example of this is taking two or more methodologies and averaging the result.
- Many contracts are silent on how the MIPS factor is determined. As mentioned above, this is like buying water by the bottle without specifying the size of the bottles.
- The actual billing is based on a different calculation than the one set out in the contract. This happens when the technical people providing input to the billing haven’t seen, or haven’t understood, what is in the contract. We have seen cases where this has gone unnoticed for years, and where the customer was overcharged by hundreds of thousands of euros every year.
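To put a number on the first pitfall, here is a sketch using only the figures quoted in this article: billing a 5-LCP LPAR at the stand-alone 5-way rate (1479 MIPS per processor) instead of the rate of the real 33-way machine it runs on (1070.9). The 1000 CPU seconds of hourly usage is a hypothetical figure for illustration.

```python
def billed_mips(cpu_seconds: float, mips_factor: float) -> float:
    """MIPS billed for one hour of usage at a given per-processor MIPS factor."""
    return (cpu_seconds / 3600) * mips_factor

cpu_seconds = 1000  # hypothetical hourly usage of the 5-LCP LPAR

correct = billed_mips(cpu_seconds, 1070.9)   # rate of the real 33-way machine
inflated = billed_mips(cpu_seconds, 1479.0)  # rate as if the LPAR were a 5-way box

overcharge_pct = (inflated / correct - 1) * 100
print(f"{overcharge_pct:.0f}% overcharge")  # → 38% overcharge
```

The same CPU seconds, priced under two readings of the contract, differ by roughly 38% – which is why the methodology needs to be pinned down in writing.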
In the end, the most important thing is that the choice of MIPS table and the methodology for choosing the ‘effective’ computer model are well understood between the parties and clearly documented in the contract. Using an industry standard method and MIPS table makes it easier to benchmark prices against other outsourcers. Our recommendation is to determine the ‘effective’ model by including all GCP and zIIP processors that are not dedicated or offline and using either Watson Walker or IBM’s MIPS tables.
But we are not finished with MIPS-based billing just by agreeing on how to compute MIPS. There are a lot of interesting commercial factors around the price per MIPS that we will dig into in the next blog.
Originally published on SMT Data.
Steven Thomas is the CTO and COO at SMT Data – the Specialists in IT Business Intelligence. Steven holds a Master’s degree in Computer Science from Stanford and brings almost 30 years of technical and business experience from Saxo Bank, Fidelity Information Services and IBM.