MIPS, or Million Instructions Per Second, is a performance measurement for assessing the computing power of a computer system. It measures the rate at which the system can execute instructions, or the throughput of the mainframe. Generally, a higher MIPS rating indicates a faster system.
MIPS evolved to become the standard metric for billing, with a cost per MIPS. In 2015, a study in Science of Computing put the average cost per MIPS at $3,285. A later AWS analysis suggested that the average annual cost per installed MIPS for a large mainframe was about $1,600.
More recently, IBM introduced the Million Service Unit (MSU), which measures the amount of processing work a computer performs in one hour. And while a MIPS will always be a MIPS, most mainframes are now charged on throughput, with different measures for consumption or usage.
What Drives MIPS
MIPS usage is directly driven by CPU usage, and CPU usage depends primarily on applications’ code. Consider a car – if you drive more, it costs you more in gas. And any adjustments to the vehicle — changing the air pressure, switching tires, storing sandbags in the trunk for winter — each one makes the car run differently.
The same is true with servers. The more stuff running on the hardware, the more it uses the CPU and the more it costs.
Regardless of terminology or unit of measure, there is a direct correlation between using more MIPS and driving up costs. As companies face increasing pressure to reduce costs and improve efficiency, the mainframe and its high operating costs face scrutiny.
Reducing costs stemming from MIPS isn’t straightforward. Fortunately, some approaches and tools make it easier for organizations to optimize their infrastructure and shift workloads.
1. Containerization
Containerization refers to the process of encapsulating mainframe applications and their dependencies into a single isolated and portable container that can run in an external environment. It leverages virtualization technology to create an abstraction layer between the application and the underlying hardware, allowing the mainframe applications to run consistently across different platforms.
Advantages to containerizing mainframe applications include:
- Simpler application deployment.
- Faster testing, development, and deployment cycles.
- Improved flexibility, scalability, and use of legacy systems.
- A more lightweight, portable way to run mainframe applications.
- Fewer resources required to run the application.
Meanwhile, MIPS consumption decreases because containerization moves certain applications off the mainframe to more modern and efficient external environments, such as cloud or distributed systems. By shifting some of these workloads to containerized environments, organizations reduce the load on the mainframe, lowering MIPS and freeing capacity for more critical applications.
Containerization can also enable more efficient use of mainframe resources by allowing containers to be scaled up or down as needed, based on demand, thereby reducing MIPS consumption and costs.
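As a hedged sketch of what containerizing a rehosted mainframe program can look like, a container image might be defined along these lines. The base image, the GnuCOBOL runtime, and the `claims-batch` program name are illustrative assumptions, not a specific migration toolchain:

```dockerfile
# Hypothetical image for a rehosted mainframe batch program.
# Base image and runtime are illustrative, not a specific product.
FROM ubuntu:22.04

# Install a COBOL runtime for the recompiled program (GnuCOBOL here,
# purely as an example of an off-mainframe execution layer).
RUN apt-get update && apt-get install -y gnucobol && rm -rf /var/lib/apt/lists/*

# Copy the recompiled application and its configuration into the image.
COPY build/claims-batch /opt/app/claims-batch
COPY config/ /opt/app/config/

WORKDIR /opt/app
ENTRYPOINT ["./claims-batch"]
```

Once packaged this way, the same image can run on a laptop, a distributed server, or a cloud container service, which is what lets the workload leave the mainframe in the first place.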
I expect that moving forward, containerization will provide a new level of options and flexibility for hosting workloads.
2. Data Caching
With the proper tools, mainframe data caching can reduce workload and improve the performance of systems. Caching achieves this by temporarily storing frequently accessed data in high-speed cache memory. It not only reduces the time it takes to retrieve data from the mainframe but also reduces workload and MIPS.
Caching algorithms are used to determine which data should be stored in the cache and when it should be evicted to make room for new data. The cache is typically located close to the mainframe, which allows for faster access times and reduced latency.
Caching mainframe data is especially impactful in large-scale environments where multiple applications can access the same data simultaneously. The key to optimizing mainframe data caching is selectively pulling data that should be stored in the cache and setting appropriate expiration policies in a very granular way.
Scenario: Insurance Company
Imagine an insurance company with an underwriting engine and a claims adjudication application, both of which need a customer address stored on the mainframe. By pulling the customer data from the cache, the two applications not only reduce calls to the mainframe but are also likely to see lower latency from the external cache than from the mainframe.
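To make the idea concrete, here is a minimal illustrative sketch of a selective cache with per-entry expiration sitting in front of mainframe reads. The `fetch_address_from_mainframe` loader and the cache key are invented for the example, not part of any real product:

```python
import time

class TTLCache:
    """Tiny illustrative cache: stores values with per-entry expiration."""
    def __init__(self, default_ttl=300.0):
        self.default_ttl = default_ttl
        self._store = {}  # key -> (value, expires_at)

    def get(self, key, loader, ttl=None):
        """Return the cached value, or call loader() (e.g. a mainframe read) and cache it."""
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[1] > now:
            return entry[0]          # cache hit: no mainframe call
        value = loader()             # cache miss: one mainframe call
        self._store[key] = (value, now + (ttl or self.default_ttl))
        return value

# Hypothetical usage: two applications request the same customer address.
calls = 0
def fetch_address_from_mainframe():
    global calls
    calls += 1  # counts how often the mainframe is actually hit
    return "123 Main St, Springfield"

cache = TTLCache(default_ttl=60)
addr1 = cache.get("customer:42:address", fetch_address_from_mainframe)  # miss
addr2 = cache.get("customer:42:address", fetch_address_from_mainframe)  # hit
```

Both applications get the same address, but only the first request touches the mainframe; tuning the TTL per data type is where the granular expiration policies mentioned above come in.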
3. API Modernization
Here’s a thought – what happens if I took the processing of an API and moved it to a non-mainframe platform? One, I could get all the benefits of modernizing my legacy application without incurring additional mainframe charges; the only mainframe work would be executing the mainframe program itself, which I’d do anyway.
And two, with the API processing happening off the mainframe, all the data conversions and XML translation happen off the mainframe too. I could complete those necessary steps on a non-mainframe platform so that I wouldn’t get charged for them.
Plus, I’d gain value because I could consume the mainframe assets and data, connect them to modern applications, and create new opportunities. More people could access the data, make decisions with it, and share it. Good news – this solution already exists.
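As a sketch of what "data conversion off the mainframe" means in practice, the snippet below parses a fixed-width record (the kind a mainframe program might return) into JSON on a non-mainframe platform. The field layout is an invented example, not a real copybook:

```python
import json

# Hypothetical off-mainframe API layer: the mainframe only executes the
# program; parsing its fixed-width output and producing JSON happen here.
# (field name, start column, end column) -- an invented layout.
FIELDS = [("policy_id", 0, 8), ("name", 8, 28), ("premium", 28, 36)]

def record_to_json(record: str) -> str:
    """Convert one fixed-width mainframe record into a JSON document."""
    doc = {name: record[start:end].strip() for name, start, end in FIELDS}
    doc["premium"] = float(doc["premium"])  # numeric conversion, off-mainframe
    return json.dumps(doc)

# Example record: 8-char policy id, 20-char name, 8-char premium.
record = "AB123456" + "Jane Doe".ljust(20) + "00142.50"
print(record_to_json(record))
# prints {"policy_id": "AB123456", "name": "Jane Doe", "premium": 142.5}
```

Every CPU cycle spent in `record_to_json` is a cycle the mainframe does not have to burn, which is exactly where the MIPS savings come from.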
Managing Your Mainframe Workload
Managing mainframe capacity can be costly and complex. There is no right answer or single approach to reducing MIPS or avoiding cost increases. If you’d like to gain more control over your MIPS, contact Adaptigent’s modernization experts for a no-obligation conversation, or visit www.adaptigent.com.
Adaptigent enables companies to connect disparate systems and modernize legacy platforms for better business outcomes.