While it may be difficult to credit the sincerity of someone who self-critiques by saying, “I care too much,” there are times when I apply the sentiment to myself. In my 37 years in the IT industry, I have been employed by mainframe software vendors for about half the time, and have been a customer of mainframe software vendors for the other half. I’ve seen things from each perspective, and even when I’m on the vendor side of the table, I can usually sympathize with the customer’s situation.
For mainframe software customers, the situation is a difficult one these days. Escalating costs, skilled technician shortages and inflexible legacy systems are all realities, though self-interested writers, vendors and service providers amplify perceptions so that the realities appear more alarming than they are (as Allan Zander observed recently).
Through the years, I have had opportunities to argue both sides of the keep-the-mainframe / dump-the-mainframe debate. I’ve generally been a mainframe proponent, and for a long time I was one of those who dismissed the “PC platform” as unsuited to business-critical workloads. But about 10 years ago, a smart guy from Microsoft told me something that changed my thinking. He made the very accurate observation that you don’t run enterprise workloads on a standalone Windows laptop—you run them in a data center that has implemented all the systems management tools and procedures that we have traditionally associated with mainframe environments. So it’s not so much the platform as how you deploy, operate and manage it. This means you can take a pragmatic approach to choosing where to run—or rehost—various workloads.
All that said, sometimes when I’m meeting with a customer I just get stopped dead.
A few months ago, I met with the deputy CIO of a large public agency that runs 17 major applications, decades old and still under active development, which leverage three different mainframe databases, at least four different mainframe programming languages (including Assembler), and full suites of mainframe system management and utility products from all the usual large mainframe vendors. Many of the applications are integrated across platforms and have modern, customer-facing user interfaces. There appear to be no issues with providing end users with required functionality, and little concern about skills availability. At least one of the applications averages 3,000 concurrent users, with daily spikes up to 5,000.
The deputy CIO told me that he envisions a future for the agency’s applications based solely on .NET and Microsoft SQL Server. Fair enough. But he went on to say that a consultant had done a study and determined that they could run all the mainframe systems on “half a blade” of a Windows server, and he was planning a project to migrate everything to the Microsoft stack in 18 months. He justified this by saying he is not anti-mainframe, but “mainframes solve a lot of important problems; we just don’t have any of those problems.”
I was struck speechless.
Evidently, the consultant had done some back-of-the-napkin calculations using readily available sizing methodologies. Such methodologies can produce misleading results that fail to reflect the real-world implementation ultimately required: virtualization, failover, disaster recovery and a plethora of both production and non-production instances.
I know today’s powerful multiprocessor, multi-core servers can deliver tremendous throughput and support for extreme workloads, but the “half a blade” assertion leaves the impression that the mainframe can be replaced with a server costing a few thousand dollars. In fact, once the requisite hardware and software have been acquired and deployed to operate the application workload in a properly managed data center environment, it’s a safe bet that the running costs will remain a large fraction of the mainframe costs.
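To make the point concrete, here is a minimal back-of-the-envelope sketch in Python. Every multiplier is my own assumption, not a figure from the consultant’s study or from any vendor; it simply illustrates how a raw “half a blade” of capacity grows once failover, disaster recovery and non-production environments enter the picture.

```python
# Back-of-the-envelope sketch (all multipliers are hypothetical assumptions)
# of how a "half a blade" sizing grows once real-world deployment
# requirements are included.

def deployed_footprint(base_blades: float,
                       failover_factor: float = 2.0,    # assumed active/passive HA pair
                       dr_factor: float = 2.0,          # assumed mirrored DR site
                       nonprod_environments: int = 3):  # assumed dev, test and QA copies
    """Estimate total server capacity (in blades) for a production workload.

    base_blades is the raw compute the sizing study reports; every other
    parameter is an assumed multiplier for illustration only.
    """
    production = base_blades * failover_factor * dr_factor
    nonprod = base_blades * nonprod_environments
    return production + nonprod

# The consultant's figure: half a blade of raw capacity.
print(deployed_footprint(0.5))   # -> 3.5 blades, before any headroom or growth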
Still, that means savings, right? Perhaps, but a migration of those 17 major, heterogeneous applications off the mainframe is unlikely to realize a return on investment for at least a decade. And that’s just to deliver “like for like”: recreating the functionality already available on the mainframe, while absorbing an enormous migration and retesting effort. If the agency’s key problem is cost, it’s doubtful that this approach will mitigate it significantly.
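The payback arithmetic is equally simple to sketch. The figures below are purely hypothetical and chosen only to show the shape of the calculation, but with any realistic migration cost and a modest reduction in running costs, the break-even point lands many years out.

```python
# Illustrative payback arithmetic (all dollar figures are hypothetical):
# even if the migration trims annual running costs, the one-time migration
# and retesting effort dominates the economics for years.

def payback_years(migration_cost: float,
                  annual_mainframe_cost: float,
                  annual_target_cost: float) -> float:
    """Years until cumulative savings cover the one-time migration cost."""
    annual_savings = annual_mainframe_cost - annual_target_cost
    if annual_savings <= 0:
        return float("inf")   # the move never pays for itself
    return migration_cost / annual_savings

# Hypothetical example: a $40M migration that cuts $10M/year of running
# costs to $6M/year still takes a decade to break even.
print(payback_years(40_000_000, 10_000_000, 6_000_000))  # -> 10.0
```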
To recap: the agency is operating an effective mainframe-based service that is reliable, secure and delivers high performance—thanks to the mainframe’s ability to “solve a lot of important problems” that I’m quite sure the agency has. Migrating to an alternative platform is probably technically feasible, but would require a costly and time-consuming process while delivering a questionable ROI sometime far in the future.
I resisted the urge to argue, because it was clear his mind was made up.
But I note now that the agency has a new deputy CIO.
J. Wayne Lashley is Chief Business Development Officer of Treehouse Software, Inc., a global leader in mainframe data integration, replication and modernization. With an extensive resume of business and technical roles spanning four decades in IT, Wayne speaks with authority on mainframe issues as a vendor, as a customer, and above all as an advocate of mainframe technology.
Connect with Wayne on LinkedIn: www.linkedin.com/in/j-wayne-lashley