Do you think the mainframe is dead? A dinosaur, an ancient piece of technology that has outlived its usefulness like cathode ray tubes, rotary dial telephones, and dial-up modems?
I have some news for you. The mainframe has adapted: it is alive and well, and major banks and financial institutions are continually growing their use of it. In fact, mainframes are at the core of their technology strategies. The reason? Mainframes deliver functionality and reliability unmatched by any other platform.
Why major banks and financial institutions grow mainframe use
Here are five reasons why major banks and financial institutions continue to grow their mainframe use:
- Delivers continual uptime
Imagine your bank having regular outages. With a mainframe, banks generally do not have outages, planned or unplanned. Take the example of several German banks that have run their mainframes continuously for over six years with no system downtime. What other platform is that reliable, with an architecture that lets you carry out full hardware and software upgrades without any downtime? And, of course, for a bank’s customers it means fast, reliable, and secure service.
- Leverages new assets to meet growing mobile needs
Mainframe systems can accommodate and adapt to the world’s steadily increasing financial transaction workloads. That is remarkable when you consider the mainframe was designed and implemented decades before the web, mobile devices, and the Internet of Things (IoT). The mainframe meets the needs of customers who require instant access to financial data from anywhere, at any time. Not only is the mainframe alive, it is thriving in today’s mobile and cloud-based computing environments (see the sketch after this list for how a mobile channel typically reaches a mainframe transaction).
- Supplies superior processing power suited to large-scale banking and finance IT
Banks and financial services organizations deal with huge amounts of data, which is one of the reasons they continue to grow their use of mainframe computing. Few systems of record can match the mainframe platform for transaction throughput or for tackling big jobs such as handling millions of credit card transactions. Today, the mainframe is still the go-to IT platform for large-scale transaction processing.
- Accesses and analyzes time-critical data
Nearly every major organization worldwide uses mainframe databases to support mission-critical applications. Contrary to what many newer Big Data vendors will tell you, the best place to perform business analytics on mainframe transactional data is on the same platform where the data resides: the mainframe. For that reason, organizations in many industries (banking, healthcare, insurance, retail, manufacturing) have been running analytics on the mainframe for years, and analytics run today on a variety of operating systems on the mainframe platform.
- Integrates data for multiple lines of business
Some people may not realize that the mainframe runs many different operating systems: IBM’s own z/OS (which includes a UNIX environment), as well as Linux, z/VM, and z/TPF, among others. This, together with the platform’s reliability, robustness, and security, makes the mainframe ideal for large organizations that must manage disparate data in different formats, in different databases, used by different lines of business. There are applications available to manage all of this, from IBM as well as from a number of third-party ISVs, a whole community providing additional mainframe capability all the time.
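To make the mobile point above concrete, here is a minimal sketch of how a mobile or web front end might reach a mainframe-hosted transaction over an ordinary HTTP API, the kind of interface that tooling such as IBM z/OS Connect can expose in front of CICS or IMS transactions. The host name, path, and JSON field names are hypothetical illustrations, not a real service.

```python
# Minimal sketch: a client fetching an account balance from a REST API that
# fronts a mainframe transaction. The endpoint and payload shape are
# assumptions for this example; real deployments define their own API
# contracts (for instance via IBM z/OS Connect over CICS or IMS).
import requests

API_BASE = "https://api.example-bank.com"  # hypothetical gateway in front of the mainframe

def get_balance(account_id: str, token: str) -> float:
    """Call the (hypothetical) balance API and return the current balance."""
    resp = requests.get(
        f"{API_BASE}/accounts/{account_id}/balance",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,  # mobile clients should fail fast rather than hang
    )
    resp.raise_for_status()
    data = resp.json()
    return float(data["balance"])  # field name is an assumption for this sketch

if __name__ == "__main__":
    # Example usage with placeholder credentials.
    print(get_balance("12345678", token="dummy-token"))
```

The point of the sketch is simply that the channel (mobile, web, IoT) talks to a conventional HTTP API, while the system of record behind it stays on the mainframe.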
One reason why the mainframe’s age gives it a bad rep
The mainframe has a bad rep because it is old, and so it gets likened to a dinosaur. The truth is that what we think of as “older” is not old at all. In fact, one of the newest mainframe systems, the IBM z13s, was released in 2016.
Much of the bias against mainframe technology centers on the erroneous idea that mainframes are excessively expensive and that their days are somehow numbered. For large businesses, mainframe technology is actually very cost-effective, and the basic mainframe capacity metric, MIPS, continues to rise across the user base. Further, mainframe systems can be optimized and their operating costs dramatically reduced for a fraction of the cost of a mega rip-and-replace migration project. This applies both to organizations running modern mainframe applications under constant development and maintenance, and to those running legacy mainframe applications.
Consider these facts:
- Mainframes account for 68% of IT production workloads, but only 6% of IT spend. (Source: Solitaire Interglobal)
- Over the past five years, costs at server-intensive IT shops have gone up 65% more than those of mainframe-intensive IT shops. (Source: Rubin Worldwide)
- Mainframe-intensive companies earn 28% more per dollar of IT infrastructure than server-intensive companies. (Source: Rubin Worldwide)
- Between 2005 and 2014, the ratio of mainframe MIPS to mainframe full-time-equivalent employees grew 351%. Never in the history of IT have so many owed so much to so few as to dedicated, long-serving mainframe employees. (Source: Gartner)
Maturing with age
Despite recurring predictions of its imminent demise, predictions that are surprising given its trajectory, the mainframe continues to adapt and thrive. Consider its adaptability in today’s mobile and cloud computing environments, where vast amounts of computing power and storage are required.
Bill Olders is a Technology Visionary and an experienced Board Director, CEO and Senior Executive with a successful track record in both turnaround and growth situations within the technology and training industries. His specialties include business and technology solutions, rules-based applications development, enterprise-level IT problem solving, and much more.
Interesting point of view. Are there any numbers supporting the headline?
It’s not super-clear that the benefits mentioned are not also true of other approaches, especially as the weakest link is not the compute h/w; it’s the network routing and the software.
Is there no impact on m/f thinking arising from Brewer’s Theorem? Species-scale systems carefully segment the different parts of the application according to CAP criteria in the face of low-ish quality networks, and often favour Availability over Consistency (e.g. when shopping online, browsing and ordering demand high availability to close the sale, while payments demand Consistency and cannot rely on a model such as blockchain to avoid a single service instance).
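As a rough illustration of the split this comment describes, here is a minimal sketch in which browsing reads favour availability (any live replica will do, even if slightly stale) while payment writes go through a single strongly consistent path. All class and store names are hypothetical, not a description of any real system.

```python
# Minimal sketch of a CAP-style split: browsing favours availability,
# payments favour consistency. Names and structures are hypothetical.
import random

class ReplicaSet:
    """Eventually consistent read replicas used for catalogue browsing."""
    def __init__(self, replicas):
        self.replicas = replicas

    def read(self, key):
        # Availability first: ask any live replica, accept possibly stale data.
        return random.choice(self.replicas).get(key, "<unknown>")

class PaymentLedger:
    """A single authoritative ledger: consistency first, even if it means refusing work."""
    def __init__(self):
        self.balances = {}
        self.available = True

    def debit(self, account, amount):
        if not self.available:
            raise RuntimeError("ledger unreachable: refuse rather than risk inconsistency")
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[account] -= amount

if __name__ == "__main__":
    catalogue = ReplicaSet([{"sku-1": "blue mug"}, {"sku-1": "blue mug (stale price)"}])
    print(catalogue.read("sku-1"))   # browsing tolerates staleness to stay available

    ledger = PaymentLedger()
    ledger.balances["alice"] = 100
    ledger.debit("alice", 30)        # payments insist on one consistent view
    print(ledger.balances["alice"])
```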