What is mainframe modernization? Well, a lot of vendors will tell you that it means replacing some or all of your mainframe infrastructure, which is old and costly to run (or so they’ll tell you), with commodity servers, software and services that will run your applications for less money and, at the same time, run them faster. Those are pretty bold claims. Their customers aren’t always happy, but the big words do sell well.
To be fair, though, these companies do have plenty of happy customers. That’s because there have always been folks running applications on the mainframe that are in fact better served by running them (or improved versions of them) on commodity servers.
But that’s not the only way to “modernize” a mainframe. What are the actual aspects of the mainframe that require modernization? Well, it comes down to three main attributes: cost, professional skills and modern interfaces – and that’s really it. So let’s take a closer look at all three of these “problems” you’ve been saddled with by your mainframe systems.
Cost
There is no argument that mainframe computing comes with a hefty price tag. And if you’re running non-business-critical workloads on a mainframe, that’s going to be pretty painful. If that’s the case, call one of those vendors I mentioned at the start; one of them can certainly help you. However, if your mainframe processes 75% of your revenue, and you’re running high-intensity transaction processing, you’re probably using the most cost-effective computing solution on the planet for those kinds of workloads.
The cost of the mainframe includes the hardware (very costly), the software (costly), power consumption, support personnel, and more. Replacement servers are very cheap per unit, but costly when you run hundreds or thousands of them. Server software is also pretty reasonable per server, but can become outrageous across hundreds or thousands of servers. The same goes for support personnel, power and air conditioning, and so on. If you really care about the bottom line, you had better be prepared to look closely at this. You could start by reading a paper that details the cost advantage of the mainframe for large-scale transaction-intensive workloads.
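To see why per-unit prices can mislead, here’s a back-of-envelope sketch in Python. Every figure in it is hypothetical – plug in your own numbers – but it shows the one thing the glossy brochures skip: per-server costs multiply by the server count, and by the years you run them.

```python
# Back-of-envelope 5-year cost of a commodity server farm.
# EVERY number below is hypothetical; substitute your own.

def farm_tco(servers, unit_cost, sw_per_server_yr,
             servers_per_admin, admin_salary, power_per_server_yr, years=5):
    """Rough total cost of ownership for a farm of commodity servers."""
    hardware = servers * unit_cost
    software = servers * sw_per_server_yr * years
    staff = (servers / servers_per_admin) * admin_salary * years
    power = servers * power_per_server_yr * years
    return hardware + software + staff + power

total = farm_tco(servers=1000, unit_cost=5_000, sw_per_server_yr=2_000,
                 servers_per_admin=25, admin_salary=120_000,
                 power_per_server_yr=1_500)
print(f"5-year server-farm estimate: ${total:,.0f}")  # $46,500,000
```

Cheap $5,000 boxes add up to a very non-cheap farm; a fair comparison puts the mainframe’s sticker price next to that total, not next to the unit price.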
Professional skills
Yes, we are running out of personnel with mainframe experience; they’re getting older and they’re retiring. But is that expertise dying off? No. Is it really impossible to find experienced mainframe people? Well, no. It’s just hard to find young people with mainframe experience. Actually, I might go so far as to say that the seemingly diminished availability of mainframe skills is in reality an artificial shortage.
You see, for years, many IT organizations have followed the bad advice from Gartner to implement Bimodal IT – dividing IT into two groups: the “cool kids” group running new technologies, and the legacy group running the mainframe. That effectively puts the mainframe and all of the people associated with it into a silo, where no new development, no hiring, and no new purchases are “needed.” The natural result is that staff lost to attrition are not replaced; in fact, any downsizing is disproportionately targeted at the unpopular silo, which in turn causes a predictable “shortage” of mainframe expertise.
There are people with mainframe experience out there – any decent headhunter can find one for you. However, finding a young, cheap programmer with mainframe experience? That’s harder. But what about your own people? Are COBOL and JCL skills beyond today’s millennial computer nerds? Certainly not. Seriously, you could train your own people, or hire new grads who can do that work. Look at what’s happening in India – mainframe support has been outsourced there for years, and now the country has legions of mainframe professionals. The “shortage” can be overcome.
Modern interfaces
Yes, the green-screen interfaces are cumbersome and often hinder user workflows. Yet organizations still use them. Why? Mostly because they don’t want to invest in a new computing architecture, or they’re putting off the inevitable. They are wary of the risks such a paradigm shift might bring – the interruption of daily business, the possibility that the shift won’t work, or will end up costing more, or any number of unknowns. And that is a completely understandable response to a pending change that is fraught with risk and guaranteed to cost millions of dollars in capital expense.
Does that mean these IT organizations are stuck with green-screen interfaces? No. Clearly a migration would result in new interfaces, but there’s no reason why new interfaces can’t be applied to the mainframe. It’s just a matter of finding the right way to do it.
Progressive modernization
Since the three big reasons for “mainframe modernization” are separately solvable without necessarily diving into a full-on migration, what techniques are available right now? Well, it turns out that there are plenty. Probably the most pressing is the user interface issue, and there are many ways to tackle that problem. Similarly, there are solutions that can ease your reliance on maintaining legacy code. If cost is an issue, there are creative ways to put a big dent in that. And if performance is an issue, there are some very clean solutions for that too. Finally, improved IT transparency can help to solve many of these challenges.
Interface modernization
There is actually a long history of solutions for modernizing mainframe green-screen interfaces. The first were screen scrapers – which still exist today – capturing and converting character data, or capturing bitmap data. Some emulators used user macros that could drive up mainframe resource usage costs. More adventurous techniques involve actually redesigning some of the legacy code. All of these approaches carry some level of risk – rising resource charges, significant redevelopment effort, and so on.
Today the biggest demand is for mobile access to mainframe applications, and there are now solutions that leverage the legacy code base to drive new mobile interfaces for green-screen applications. The good news is that these tools use legacy applications as they are. Legacy applications contain years’ worth of intellectual property, and they run fast and reliably. Those advantages are preserved – no new mainframe-side processing takes place; in fact, the legacy application need not be modified at all.
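Here’s a minimal sketch of what such a facade can look like: a small REST service that a mobile app calls, while the host transaction runs unchanged behind it. Everything in it is illustrative – the connector function, the transaction ID and the field names are invented stand-ins for whatever bridge your modernization tool actually provides (3270 navigation, CICS web services, MQ, and so on).

```python
# A tiny REST facade for a mobile front end; the mainframe application
# behind it is not modified at all. All names here are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

def invoke_transaction(tran_id, fields):
    # Stub standing in for the vendor connector that drives the unchanged
    # host transaction and returns its screen fields as a dict.
    return {"CUR-BAL": "1043.17", "ACCT-STAT": "OPEN"}

@app.route("/api/accounts/<account_id>")
def account_summary(account_id):
    # Drive the existing green-screen transaction exactly as a terminal
    # user would, then reshape the host fields into mobile-friendly JSON.
    screen = invoke_transaction("ACCT", {"ACCT-NO": account_id})
    return jsonify({
        "account": account_id,
        "balance": screen["CUR-BAL"],
        "status": screen["ACCT-STAT"],
    })

if __name__ == "__main__":
    app.run(port=8080)
```

And that leads us to code modernization.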
Code modernization
Today, there are solutions that can leverage all of the code design work done on COBOL programs over the past decades, and help you to move seamlessly into the future (where there may be a continuing shortage of mainframe COBOL, JCL and assembler language expertise). Some of these solutions translate code into various distributed-systems flavors of COBOL; however, they are generally limited to smaller projects, where a re-platforming will not hurt performance. For larger projects, costs quickly get out of control when you try to match previous levels of throughput, five-nines reliability, redundancy, and both horizontal and vertical scalability on another platform.
Better solutions let you leverage existing code as it is, without a major redesign, re-engineering or complete migration. For larger projects, leveraging what is in place is the fastest and most economical way to modernize. New code can interwork with the legacy code – new business rules and business logic can augment the legacy code base, written by younger, cheaper programmers with modern toolsets and programming languages. And that code can run anywhere – on your mainframe, or on other platforms.
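As a toy illustration of that interworking pattern (all names and numbers here are invented), the legacy transaction keeps doing the pricing it has done reliably for decades, and the only new code you write and test is the business rule layered on top:

```python
# Sketch of "augment, don't rewrite": new business logic wraps an
# unchanged legacy transaction. The stub below stands in for whatever
# bridge you use to reach the host (REST, MQ, a vendor interworking layer).

def legacy_price_quote(customer_id: str, item: str) -> float:
    # Stub for the existing COBOL pricing transaction, untouched.
    return 100.00

def quote_with_loyalty_discount(customer_id: str, item: str,
                                loyalty_years: int) -> float:
    """New business rule layered on top of the legacy result."""
    base = legacy_price_quote(customer_id, item)  # decades of proven IP
    discount = min(loyalty_years, 10) * 0.01      # new rule: up to 10% off
    return round(base * (1 - discount), 2)

print(quote_with_loyalty_discount("C1001", "WIDGET", loyalty_years=7))  # 93.0
```

The same shape works whichever way the bridge to the legacy side is built, and the new rule can run on the mainframe or off it.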
Cost modernization
While running a mainframe cannot, in and of itself, truly be considered a cost problem (see above), there are many ways to optimize mainframe operations without making changes to code logic, databases, or platform hardware. One is high-performance in-memory technology, which can sharply reduce the amount of CPU and MSU resources used by your mainframe applications, thereby reducing their impact on the monthly bill. Similarly, smart performance capping can reduce cost – some of the best solutions can do that without actually capping business-critical workloads.
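The in-memory idea is simple enough to show in a few lines. The sketch below is plain Python with SQLite – purely illustrative, not how mainframe in-memory products are built – but the principle is the same: keep a hot, mostly static reference table in memory, so a million lookups cost one database read instead of a million.

```python
import functools
import sqlite3

# A stand-in "database" holding a hot, rarely-changing reference table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rates (code TEXT PRIMARY KEY, rate REAL)")
conn.executemany("INSERT INTO rates VALUES (?, ?)",
                 [("STD", 0.05), ("PREF", 0.03), ("PENAL", 0.12)])

@functools.lru_cache(maxsize=None)
def rate_for(code: str) -> float:
    # First call per code reads the database; every later call is a pure
    # in-memory lookup, costing no further I/O (on a mainframe: no MSUs).
    return conn.execute("SELECT rate FROM rates WHERE code = ?",
                        (code,)).fetchone()[0]

for _ in range(1_000_000):   # hot loop: one DB read total, not a million
    rate_for("STD")
print(rate_for("STD"))       # 0.05
```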
Performance modernization
One tried and true method to improve performance is a general systems upgrade – adding processor cores, memory and other hardware to your existing machines, or even upgrading to the newest mainframe system (z14), if you haven’t already done that (some might tell you to go back to z12, but that’s another discussion :-). Upgrading some system software can also improve performance. These solutions will, of course, come with an increase in operating costs. However, some of the same solutions that help control costs can also make a big difference in performance, without adding to your monthly bill – for example, in-memory technology can improve application performance as well as database performance (in shops where many database applications can be optimized).
Transparency modernization
As you know, tremendous amounts of IT data are saved every hour of every day on all of your systems, both your mainframe systems and your midrange servers – enough that you could realistically call it your own “IT Big Data.” All companies leverage this data at least for paying the monthly licensing bills. The more serious IT organizations also use it to examine efficiency and to glean some analytical insight.
Going beyond that, however, is where you can make a quantum leap – and that means IT business intelligence. By adding business structure and costing information to your IT data, it becomes possible to measure who in the company is using which resources, and how much that is costing. It can also help to measure the immediate effects of business changes (company mergers and acquisitions, process changes, new product introductions, etc.). The power of IT’s own data can help change IT’s position from being just a huge cost center to being a window into overall business efficiency.
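Mechanically, this is a join and a roll-up. In the toy sketch below, the record layout, the ownership map and the dollar figure per CPU-second are all invented – but it shows how tagging raw usage records with business structure and unit costs turns them into a per-department bill:

```python
from collections import defaultdict

usage = [                                  # simplified SMF-style usage records
    {"job": "PAYROLL1", "cpu_sec": 420.0},
    {"job": "BILLING7", "cpu_sec": 980.0},
    {"job": "PAYROLL2", "cpu_sec": 150.0},
]
owner = {"PAYROLL1": "HR", "PAYROLL2": "HR", "BILLING7": "Finance"}
COST_PER_CPU_SEC = 0.18                    # hypothetical fully-loaded $/CPU-sec

bill = defaultdict(float)
for rec in usage:                          # tag each record with its cost center
    bill[owner[rec["job"]]] += rec["cpu_sec"] * COST_PER_CPU_SEC

for dept, cost in sorted(bill.items()):
    print(f"{dept:8s} ${cost:,.2f}")       # Finance $176.40 / HR $102.60
```

Swap in real usage data and your own cost model, and those few lines become a chargeback report – a first answer to “who is using what, and what does it cost?”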
Moving forward
Nobody will argue that today’s IT systems must be modernized to handle the new and changing demands of tomorrow. And there are as many ways to do that as there are bits of data in your cell phone’s memory card. But don’t let anyone define for you what “modernization” means – it doesn’t mean using Vendor A’s specific (and possibly inflexible) software solutions, and it certainly doesn’t mean suddenly, or even gradually, dumping your existing high-value and mission-critical IT assets into the landfill. So if you’re running a mainframe – the very best system on the planet for processing business data – and it’s generating 60 or 75 percent of your revenue, find someone who will actually modernize it for you, not just replace it with their own who-knows-what…
Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. A “Friend of the Mainframe,” Allan has spent years addressing both the technical and business needs of Global Fortune 500 customers, which has given him great insight into the industry’s opportunities and challenges – making him a sought-after writer and speaker on the topic of databases and mainframes.