Mainframe Pricing to Cloud

In these days of hybrid cloud working, with some applications running best in a cloud environment and others working best on the mainframe, there's going to come a time when mainframers and cloud people find themselves sitting in the same room discussing how computing is paid for.

The cloud guys will say that pricing on the cloud is straightforward and that you only pay for what you use. They might also say that moving to the cloud reduces costs because less physical space is needed, less hardware is required, there's less costly high-end software, and fewer support personnel are needed. They will also say that there's ready access to a wide variety of the latest technology, plus a common framework across the application infrastructure. In addition, 'shelfware' can be avoided because you use only what you need. They may also suggest that there are advantages in terms of security: for example, the infrastructure must comply with industry standards, and data is less prone to employee theft. In terms of reliability, there's built-in redundancy, and most providers guarantee 99.99% uptime. Plus, cloud technical skills are plentiful, making staff cheaper to hire.

The mainframe people might suggest that with the cloud there can be cost creep in terms of computing resources and departmental use or abuse. There is very likely to be downtime resulting from communication failures and Internet drops. There are security issues, too: the cloud does not guard against weak security practices on the customer's side. Perhaps most significantly, there are performance issues when traditional batch workloads and complex transactions are migrated to the cloud. There's also the issue of cloud vendor lock-in: vendors strongly encourage the use of their own products, making it difficult to move off any particular platform once you're there.

But the real issue comes when mainframers try to explain to the cloud guys how charging on the mainframe works. Let's have a look at the kinds of things they need to explain.

Firstly, there is MIPS (Millions of Instructions Per Second), which historically measured the number of instructions that could be processed in a second of computing time and was used as a measure of general computing capacity.

Then there are MSUs (Million Service Units), a measurement of the amount of processing work that can be performed in an hour. One 'service unit' originally related to an actual hardware performance measurement, but that is no longer the case. A service unit is an imprecise measurement, and 1 MSU is approximately 8.5 MIPS. IBM publishes MSU ratings for every mainframe model.
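
To put that rule of thumb into numbers, here's a tiny Python sketch; the 8.5 figure is just the approximation quoted above, not an exact hardware measure, and the function name is my own:

```python
MIPS_PER_MSU = 8.5  # rough rule of thumb; a service unit is an imprecise measure

def msus_to_mips(msus):
    """Convert an MSU rating to approximate MIPS."""
    return msus * MIPS_PER_MSU

print(msus_to_mips(1500))  # a 1500 MSU CPC is roughly 12750 MIPS
```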

Another problem for cloud people is the difference between licensing and pricing. If you execute IBM zSeries software on a CPC (Central Processor Complex), you must have a license to do so. A license for an IBM monthly license charged product is specific to the product and to a CPC with a particular serial number. The license is sold in terms of MSUs. If you are executing a product on a 1500 MSU CPC, you must have a 1500 MSU license specifying the serial number of that CPC. The price for a product, i.e. how much you pay IBM each month, depends on the pricing metric that is used for that product.

Once they’ve grasped that, you can move on to Monthly License Charge (MLC) products, which include z/OS, Db2, CICS, IMS, MQSeries, and COBOL. The pricing and terms and conditions for MLC products are based on the pricing metric chosen. Pricing metrics can roughly be grouped into two categories: full capacity and sub-capacity.

Under a full capacity-based metric, all software charges are determined by the capacity of the CPC in which the product runs. Parallel Sysplex License Charges (PSLC) and zSeries Entry License Charges (zELC) are examples of full capacity-based metrics.

Under a sub-capacity metric, software charges for certain products are based on the utilization capacity of the LPARs in which the product runs. Workload License Charges (WLC) and Entry Workload License Charges (EWLC) are examples of sub-capacity-capable pricing metrics. 

IBM recently introduced Tailored Fit Pricing (TFP), which is designed to be more like cloud pricing. It's based on overall hardware consumption, not the LPAR rolling four-hour average (R4HA). A baseline of overall MSU consumption for the year is taken, along with an agreement to consume more MSUs in the upcoming years. A discounted rate for MSUs is applied, and monthly billing is predictable and the same regardless of monthly variations in usage.
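
To show the cloud guys why TFP bills are predictable, here's an illustrative Python sketch. The function name, the discount, and the per-MSU rate are all made-up assumptions; actual TFP terms are negotiated with IBM:

```python
def tfp_monthly_bill(annual_baseline_msus, rate_per_msu, discount=0.3):
    """Illustrative TFP bill: a flat monthly charge derived from an
    agreed annual MSU baseline at a discounted rate.

    All numbers here are invented for illustration only.
    """
    discounted_rate = rate_per_msu * (1 - discount)
    return annual_baseline_msus * discounted_rate / 12

# 1.2 million MSUs per year at a notional $2 per MSU with a 30% discount:
# the same bill arrives every month, whatever the monthly usage pattern.
print(f"${tfp_monthly_bill(1_200_000, 2.00):,.2f} per month")  # $140,000.00
```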

The charges for products that use sub-capacity pricing are based on how much the LPARs in which the products run utilize system resources, rather than on the full capacity of the CPC. Users can purchase hardware capacity for future needs without incurring an immediate increase in their software bill. If the usage decreases when business is slow, the software bill decreases with it. If usage is seasonal, the monthly software bills are lower during periods of lower usage. Users pay for capacity on a rolling four-hour average, not on the maximum capacity reached. 

The rolling four-hour average represents the average consumption (in MSUs) of the LPAR over the last 4 hours. Every 5 minutes, the IMSU (instantaneous MSU consumption for the LPAR) is measured. The R4HA is the average of the past 48 IMSU measurements. This is not product usage-based pricing; instead, it is a bit of a hybrid.
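
For the cloud guys who think better in code, here's a minimal Python sketch of that calculation, assuming a simple list of 5-minute IMSU samples (the names are illustrative, not IBM APIs):

```python
from collections import deque

SAMPLES_PER_FOUR_HOURS = 48  # one IMSU sample every 5 minutes

def rolling_four_hour_average(imsu_samples):
    """Yield the R4HA after each 5-minute IMSU sample.

    Until four hours of history exists, the average covers
    however many samples have been collected so far.
    """
    window = deque(maxlen=SAMPLES_PER_FOUR_HOURS)
    for imsu in imsu_samples:
        window.append(imsu)
        yield sum(window) / len(window)

# Two hours at 100 MSU, then two hours at 400 MSU: instantaneous
# consumption hits 400, but the R4HA only climbs to 250.
samples = [100] * 24 + [400] * 24
r4ha_values = list(rolling_four_hour_average(samples))
print(f"Final R4HA: {r4ha_values[-1]:.1f} MSU")  # 250.0, not 400
```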

Defined Capacity (DC) is set in the Hardware Management Console (HMC) and is used to control billing. It does not enforce capping and is not mandatory. The z/OS Workload Manager (WLM) calculates and monitors the R4HA. If DC is set to a non-zero number, WLM monitors the R4HA and ensures that the R4HA is less than or equal to the DC. Of course, WLM also manages your workloads based on settings/goals. The Processor Resource/Systems Manager (PR/SM) enforces the soft cap when WLM determines it is needed. 

The Sub-Capacity Reporting Tool (SCRT), a no-charge IBM tool, reports the required license capacity (in MSUs) of each sub-capacity-eligible product. The SCRT cross-references LPAR utilization and product execution by LPAR to determine the maximum concurrent LPAR four-hour rolling average utilization – the highest combined utilization of the LPARs in which each product executes during the reporting period. Sub-capacity products are charged based on the rolling four-hour average utilization of the LPARs in which they execute. The sub-capacity report determines the required license capacity by examining, for each hour in the reporting period, the four-hour rolling average utilization by LPAR and which eligible products were active in each LPAR.

Each hour, the R4HA is compared to the defined capacity (DC). If a DC is set, the SCRT uses the lower of the two values as the utilization value for the z/OS system for that hour.
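
Here's a rough Python approximation of that logic for a single product. The data structures and function name are invented for illustration; this is a sketch of the billing rule, not the actual SCRT implementation:

```python
def required_license_capacity(hourly_r4ha, defined_capacity, active_lpars):
    """Approximate the SCRT logic for one sub-capacity product.

    hourly_r4ha      : per-hour dicts mapping LPAR name -> R4HA in MSUs
    defined_capacity : dict mapping LPAR name -> DC in MSUs (0 = no DC set)
    active_lpars     : per-hour sets naming the LPARs where the product ran

    Returns the peak concurrent utilization across the period, which
    determines the product's required license capacity.
    """
    peak = 0.0
    for r4ha_by_lpar, active in zip(hourly_r4ha, active_lpars):
        hour_total = 0.0
        for lpar in active:
            r4ha = r4ha_by_lpar[lpar]
            dc = defined_capacity.get(lpar, 0)
            # If a DC is set, the lower of R4HA and DC counts for that hour
            hour_total += min(r4ha, dc) if dc else r4ha
        peak = max(peak, hour_total)
    return peak

# Two hours, two LPARs; the product runs in both LPARs each hour
hours = [{"PROD": 300.0, "TEST": 80.0}, {"PROD": 500.0, "TEST": 60.0}]
dcs = {"PROD": 400, "TEST": 0}
active = [{"PROD", "TEST"}, {"PROD", "TEST"}]
print(required_license_capacity(hours, dcs, active))  # 460.0 = min(500,400) + 60
```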

The SCRT uses SMF 70-1 and SMF 89-1/89-2 records. The SCRT report is produced monthly, covering the period from 00:00 on the second day of the month to 24:00 on the first day of the following month. R4HA values are calculated for each MLC product, each hour, for each LPAR, and for the month.

The soft capping rule is that when the R4HA becomes greater than or equal to the DC, the LPAR is capped. That means that IMSU consumption cannot exceed the DC until the R4HA falls back below the DC.
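
And here's a small Python simulation of that rule, assuming 5-minute intervals as before; it's a sketch of the capping behaviour, not of how PR/SM actually enforces it:

```python
from collections import deque

def simulate_soft_cap(demand, dc, window=48):
    """Simulate soft capping: when the R4HA reaches the DC, IMSU
    consumption is clamped to the DC until the R4HA drops below it.

    demand : uncapped MSU demand per 5-minute interval
    dc     : defined capacity in MSUs
    """
    history = deque(maxlen=window)
    consumed = []
    for want in demand:
        r4ha = sum(history) / len(history) if history else 0.0
        capped = r4ha >= dc              # is the soft cap in force?
        imsu = min(want, dc) if capped else want
        history.append(imsu)
        consumed.append(imsu)
    return consumed

# Sustained demand of 500 MSU against a DC of 300: the first interval
# runs uncapped, then the cap clamps consumption to the DC.
print(simulate_soft_cap([500] * 6, dc=300))  # [500, 300, 300, 300, 300, 300]
```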

Let’s see how much of that the cloud guys understand the first time you tell them!

Regular Planet Mainframe Blog Contributor
Trevor Eddolls is CEO at iTech-Ed Ltd and has been an IBM Champion since 2009. He is probably best known for chairing the Virtual IMS, Virtual CICS, and Virtual Db2 user groups, and is featured in many blogs. He has been editorial director for the Arcati Mainframe Yearbook for many years.