The IBM System/360 mainframe made its debut in April 1964. With the unprecedented ability to perform 229,000 calculations per second, it became the enterprise platform for transaction-heavy, critical applications. With the emergence in the mid-1980s of mid-range systems (such as Sun's RISC-based servers) as well as PCs networked with servers, many thought the mainframe would go the way of the dinosaur. However, in the early 1990s IBM introduced the System/390 family with a groundbreaking computing capacity of 1,000 MIPS (million instructions per second). Since then, mainframe computing capacity has increased by more than 30% year over year.
After more than 50 years, mainframes still run the core business functions of 70% of the Fortune 500, including 96% of the world’s top 100 banks, and 90% of the largest insurance organizations. However, with the recent shift to cloud computing, the future of mainframes has once again been called into question. This blog explores why and how mainframes will continue to play an important role in a distributed and cloud-based world.
The Undisputed Advantages of Mainframes
The mainframe is built for complex, high-volume, data-intensive workloads. It is essentially a collection of specialized components, each of which supplies its own resources—controllers, power supplies, self-diagnostics, cooling, and so on. These in-box components communicate with each other efficiently via backplanes and specialized high-speed channels. Logical partitions (LPARs) each run their own OS, with flexible sharing of memory and CPU resources, and support for clustering.
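The partitioning model described above can be illustrated with a toy sketch: each LPAR runs its own OS image and receives a weighted share of the machine's shared CPU pool. This is a simplified illustration, not IBM's actual PR/SM implementation; all names and numbers are invented.

```python
# Toy model of logical partitioning: each LPAR runs its own OS image
# and draws a weighted share of the machine's shared CPU pool.
# Illustrative only -- not real z/OS or PR/SM configuration.

from dataclasses import dataclass

@dataclass
class LPAR:
    name: str
    os: str      # e.g. "z/OS" or "Linux"
    weight: int  # relative share of the shared CPU pool

def cpu_shares(lpars, total_cpus):
    """Divide the physical CPU pool among LPARs by relative weight."""
    total_weight = sum(p.weight for p in lpars)
    return {p.name: total_cpus * p.weight / total_weight for p in lpars}

lpars = [LPAR("PROD", "z/OS", 60), LPAR("TEST", "Linux", 20), LPAR("DEV", "Linux", 20)]
shares = cpu_shares(lpars, total_cpus=10)
# PROD receives 6.0 CPUs; TEST and DEV receive 2.0 each
```

The real hardware does far more (dynamic reallocation, dedicated channels, clustering via Parallel Sysplex), but the weighted-share idea is the essence of how one physical machine safely hosts many isolated workloads.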
Today’s mainframes support all of the latest computing technology trends. Virtualization can be implemented either via z/VM (IBM’s hypervisor technology) or KVM, a Linux-native hypervisor. As of August 2017, Docker Enterprise Edition 17.06 supports containerized apps on mainframes running Linux, with no need to modify code. Launched in January 2015 after five years of development, IBM’s z13 Systems mainframe was specifically designed to handle billions of mobile transactions from a wide range of mobile devices. It also supports big data analytics and cloud interfaces. In July 2017, the z14 Systems mainframe appeared on the market, with all of the features introduced by the z13 plus new encryption technology based on keys locked into cryptographic hardware, as well as enhanced support for machine learning analytics. With a 10-core CPU chip, the z14 is also IBM’s fastest ever mainframe.
It should not be surprising, therefore, that in a recent global survey of CIOs, 88% expect their mainframes to continue to be a key business asset over the next decade, and 81% reported that their mainframes continue to evolve—running new and different workloads.
Mainframes vs. the Public Cloud
So mainframes are here to stay. But in today’s complex, hybrid IT environments, mainframe managers need to understand the potential advantages that cloud computing offers—whether private cloud or public, or some combination of the two. In order to set the stage for a discussion of the role of mainframes in a cloud-based world, we have prepared a table that summarizes the pros and cons of mainframe vs. public cloud computing.
| | Mainframe | Public Cloud |
|---|---|---|
| Compute & storage resources | Single infrastructure that delivers massive compute & storage resources as needed. | On-demand access to compute & storage resources with virtually limitless scale. |
| Cost | If already owned, no need to purchase new infrastructure, since mainframes are designed to last for decades. Otherwise, an entry-level mainframe starts at ~$40,000, with big systems costing $1,000,000+. | No up-front infrastructure cost, but containing public cloud usage costs is a major challenge for many enterprises. |
| Maintenance | A single machine to maintain. | Non-existent (for infrastructure). |
| Security | High, with enhanced encryption support; sole responsibility of the enterprise. | High, with shared responsibility (provider for data at rest, enterprise for data in transit). |
| Software environments | Multiple OSs supported (z/OS, Linux), as well as Docker containers. Sometimes challenging to develop and deliver apps on mainframes. | Can spin up many different types of software environments, including any type of host OS, virtual servers, and containers. |
| Skills | Mainframe personnel are retiring and not being replaced fast enough by a new generation. | Relatively easy to find IT and development personnel with good cloud skills and experience. |
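To make the cost comparison above concrete, here is a back-of-the-envelope calculation over a five-year horizon. The $40,000 entry-level mainframe price comes from the table; the monthly figures are invented placeholders, since real cloud and operating costs vary enormously by workload.

```python
# Back-of-the-envelope cost comparison over a planning horizon.
# The $40,000 entry-level price is from the comparison above; the
# monthly rates are illustrative placeholders, not real quotes.

def mainframe_cost(upfront, monthly_opex, months):
    """Up-front hardware purchase plus ongoing operating cost."""
    return upfront + monthly_opex * months

def cloud_cost(monthly_usage, months):
    """Pure pay-as-you-go: no up-front cost, usage billed monthly."""
    return monthly_usage * months

months = 60  # five-year horizon
mf = mainframe_cost(upfront=40_000, monthly_opex=500, months=months)
cl = cloud_cost(monthly_usage=1_500, months=months)
# mf = 70,000 vs. cl = 90,000 under these assumed rates; the crossover
# point depends entirely on actual usage, which is exactly why
# containing cloud spend is called out as a challenge above
```

The point of the sketch is not the specific numbers but the shape of the trade-off: capital expense amortized over decades versus an operating expense that scales, for better or worse, with usage.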
The Continuing Role of Mainframes in a Cloud-based, Distributed World
Large enterprises that process high volumes of data and transactions will not abandon their mainframes in the foreseeable future, especially in highly regulated industries such as banking, insurance, and health care. Even if they wanted to move their systems-of-record to the cloud, the costs and risks of doing so are very high. Experience has shown that these migrations are time- and resource-consuming, and often result in systems that are more complex, harder to maintain, and with lower performance than the original.
The smart CIO, however, is pushing her team to identify workloads currently running on mainframes that can be migrated at low risk to a private and/or public cloud where they can benefit from cloud economics. Here are some potential candidates:
- Data archiving, long-term data retention: All the major public cloud providers tier their data storage fees according to how quickly and frequently the data needs to be accessed. The lower the access SLA, the lower the fee. Thus, archiving permanently or temporarily cold data in the cloud can be very cost-effective and very low-maintenance compared to long-term storage of the data on on-premises resources, be they mainframe or a distributed environment.
- Data immutability: Stored data can be deleted or corrupted through human error, application bugs, or malicious acts such as ransomware. In highly regulated industries, or whenever a data set is of high value, such loss or corruption can cause inestimable damage. All of the major public cloud providers offer immutable data storage buckets, which simply return an error message if anyone tries to delete or modify the contents.
- Disaster recovery: One of the key requirements of a robust DR plan is the ability to recover from anywhere. In order to effectively meet this need, it is not unusual for redundant DR data sets to be maintained in different geographic locations. DR that is cloud-based means that a single data set is accessible from anywhere, facilitating faster recovery as well as saving storage costs.
- Backup and business continuity: The flexibility and cost-effectiveness of backing up data in the cloud is well-documented. The cloud makes it feasible to keep more backup copies or recovery points than would normally be kept in a mainframe or distributed on-premises environment. As a result, enterprises can meet their optimal recovery point objectives.
- Data analytics: In a survey conducted by Syncsort, 60% of the respondents indicated that they plan to move mainframe data to cloud-oriented platforms like Splunk and Hadoop for big data analytics. However, a very large proportion of those respondents still wanted access to log, SMF, or other mainframe data in order to correlate it with their distributed data. This is a particularly good example of how to build hybrid solutions that make the best of both computing frameworks.
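The immutability bullet above can be sketched as a minimal write-once (WORM) store: like the object-lock buckets the cloud providers offer, it accepts new objects but returns an error on any attempt to overwrite or delete. This is a simplified model for illustration, not a real provider API; real services also support retention periods and legal holds.

```python
# Minimal sketch of a write-once (WORM) object store, modeled on the
# immutable-bucket behavior described above. Simplified illustration,
# not a real cloud provider API.

class ImmutableBucket:
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        """Store a new object; overwriting an existing key is refused."""
        if key in self._objects:
            raise PermissionError(f"{key} is immutable and cannot be overwritten")
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

    def delete(self, key):
        """Deletion is always refused -- that is the whole point."""
        raise PermissionError(f"{key} is immutable and cannot be deleted")

bucket = ImmutableBucket()
bucket.put("audit-2023.log", b"transaction records")
# bucket.put("audit-2023.log", b"tampered")  -> raises PermissionError
# bucket.delete("audit-2023.log")            -> raises PermissionError
```

Because the store itself refuses mutation, neither human error, an application bug, nor ransomware running with ordinary credentials can destroy the archived copy, which is what makes this pattern attractive for regulated data.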
A Final Note
Today’s CIOs do not have to choose between mainframe and cloud—they can choose which IT functions to deploy where, for better ROI and enhanced business outcomes. There are platforms emerging to help mainframe managers make the best of both worlds, such as:
- IBM Cloud’s Rocket Mainframe Data Service, which uses data virtualization technology to allow developers to easily incorporate data from z Systems into cloud or mobile applications.
- Syncsort’s Ironstream, which makes it less complex and less costly to bring operational and security log data from the mainframe into Splunk Enterprise and Splunk Cloud.
- Model 9’s z/OS Backup, Archive and Recovery software, which provides a single dashboard for managing an enterprise’s backup and data archival policies, seamlessly storing and recovering mainframe data directly to and from any storage system, whether on-premises or cloud. Model 9 is the only platform that supports full mainframe system recovery from cloud-based data.
This is all good news for those large enterprises that are unlikely to forego the mighty mainframe as their platform of choice for running business- and mission-critical applications.
With over two decades of hands-on experience in enterprise computing, data centers management, mainframe system programming and storage development, I’m now on a mission to accelerate cloud adoption at large enterprises by making their most trusted core business platforms more flexible, affordable and cloud compatible.
Connect with Gil on LinkedIn.