Mainframe Workloads

Part 1 of Reg’s article was published last week.

Nothing Exceeds Like Success

That’s right: last week’s article brought you to this point, the gist of what we need to be preparing for. If, as I have suggested, the mainframe is on its way to becoming a definitive gravity well for quality business computing, then we’re in big trouble. While the mainframe technology can scale up indefinitely, our business and technical practice culture has gotten comfortable with a scale we reached decades ago, and we’re in danger of being shattered by a sudden exponential change in the number and range of activities using our platform of choice.

Interestingly, one of our greatest opportunities for survival is the fact that we’ve been so slow to bring new people onto the platform. Just in time for an explosion of mainframe capacity usage, we’ll be bringing a massive cohort of new people on board, because we’ve put off hiring them until now, and there’s nothing like unlimited neophytes to adapt to an emergent new world of opportunities and challenges. And, of course, these new people will be distributed around the world, not just concentrated in the traditional mainframe geographies, both because of the need for offshoring to fill the talent gap, and because organizations around the world will begin discovering the ability to use gossamer-thin slices of mainframe to meet their emerging business requirements, whether over the web, through some other cloud approach, or on premises.

Let me paint you a picture of a legitimately possible scenario we’ll have to face over the coming years in order to get you ready to adapt.

First, the number of Linux people using the mainframe is likely to vastly outstrip the number of traditional z/OS (including USS), z/VM, z/VSE and z/TPF people. Now that Linux can run as a guest under z/VM, or in containers under z/OS, the penguins will make tribbles look like a few ellipses. As IBM, and hopefully many other “cloud” providers, start to make mainframes available for cloud-like usage, especially if they make very granular and highly configurable pricing models available (for example, based on Tailored Fit Pricing) and then move to automated graphical purchasing interfaces, the world is bound to discover the RAS (reliability, availability and serviceability) that even Linux is capable of when contained by a well-managed platform.

Of course, the vast majority of those Linux people will neither know nor care that they’re running on mainframes under the covers. But as the rest of the world discovers cloud Linux on mainframe as a service, traditional mainframe shops may begin to raise their gaze above their internal-political navels and realize that the resistance their non-mainframe people have to moving new workloads to the mainframe is now clearly hurting their businesses.

This will be the great tipping point in the history of the mainframe: when organizations that have had a deep political canyon between their non-mainframe and mainframe people are forced to deal with the fact that everyone who cares about cost, benefits and quality of service is taking the mainframe seriously, and moving away from all the consumer electronics machines that can’t get beyond 30% busy even when they’re not bogged down by viruses, trojan horses, hackers, and other distributed maladies.

The cascade of people falling into the mainframe will likely collide with the cascade of people who are trying harder than ever to move off the mainframe, for reasons ranging from its undeserved negative reputation and aging workforce to reports of a small number of mainframes getting hacked. And caught up with all the flotsam and jetsam will be experienced legacy mainframers who would rather retire than adapt, but who suddenly realize how vastly different their futures will be if they get on board with the future of the mainframe.

Shooting up through this miasma of distributed functionality flooding the mainframe will be a rediscovery of the even greater power of the mainframe operating systems – perhaps especially z/OS. However, it is somewhat harder to have a thin slice of z/OS without any local expertise, even if someone else is hosting it, so the traditional specializations on the mainframe will begin to re-emerge with a vengeance, and the bandwagon will start to fill up with people who want to call themselves mainframers because they once logged on to TSO/E.

As a consequence, the need for certification, which IBM, Interskill, SHARE and other organizations that offer mainframe training have so well foreseen with their digital badges, will lead to a rediscovery of the value of certified mainframe training – but often by experienced people with prior careers, or people without degrees in IT, who are realistic enough to ignore the anti-legacy biases that many computer science programs impress on their graduates.

In some ways, this will leave us working with a commodity-utility model of computing that many have aspired to for much of the history of IT, as people who don’t care about the underlying platform find themselves landing on the shores of mainframe as a service without even knowing or caring what their cloud became. But it will also open up the entire world of mainframe, currently so insular, to becoming a default platform and career choice for many, most of whose ideas about computing were formed well before they encountered the ultimate legacy environment.

And this is where we will all find ourselves when the advice I now offer reaches peak relevance.

After the Flood

Will the day come when computing becomes an established commodity utility and profession? Maybe, if you define it in terms of the hardware and operating system, network connectivity, services, and professionals who run it. Then platform providers will all settle into the role of infrastructure providers like the telcos, and humanity will continue on to the next adventure, utilizing that infrastructure to build further innovations to enable us to achieve our optimum next futures.

It might even happen in our lifetimes. But if it does, you may not notice, because you’ll be too focused on taking advantage of the next layers up of functionality that these services enable.

But before we get there, we have to pass through the winnowing out of extraneous platforms and the establishment of those that were built to last for the ages. And, let’s face it, in some ways the world was always waiting for the one platform that works to adapt to what it wanted, as we flirted with the consumer electronics pretenders while waiting for the best of what they offered to become available somewhere that wasn’t perpetually failing.

As we look at the history of computing, the platform that was built to last was designed to handle the full circle of what people needed from a quality computing environment, while temporary platforms have had important roles, allowing people to experiment with them and keep improving them because, when something is broken, ya gotta fix it. Important innovations have emerged as a result, and they are now being ported to the mainframe, or interfaced with it, depending on what works out to be most sustainable.

Stop. You’ve been coasting through this article waiting for the punch line. It’s coming next. But you’ll do yourself a favour if you re-read everything up to this point, either before or after reading what comes next, in order to receive and take advantage of the advice that follows.

Why? One of my favourite illustrations is the martial artist who puts their fist through a stack of boards or bricks. No matter how strong they are, if they’re just thinking at the surface, that’s where they’ll impact. But those who think through the task to the world where it’s complete will break on through to the other side. So, imagine yourself looking back from the world I’ve just painted, and start thinking about how you’ll get there, and what you’ll do to maximize your participation in and benefit from taking this journey.

Both Sides Now

OK, welcome to the day when you’ll look back on a truly great career, having surfed the waves of change while helping your organization participate and benefit beyond anyone else’s imagination. Innovations in information technology continue to grow exponentially, but the foundational substrates for system-of-record business processing are now firmly in place on the IBM mainframe that you bet your career on. What did you and your organization do to survive and benefit from this great re-emergence of excellence in computing?

Well, the first thing you did was think this through carefully, and then contact your peers and management and walk them through it as well. Maybe you even followed the storied example of the emperor’s guard who wrote up what he believed to be the most powerful force on Earth and put it under the emperor’s pillow, and was rewarded by that emperor by being sent to his own kingdom (Google up “1 Esdras 3-4” for the story). Just don’t keep your mouth shut.

I know, I know: that’s the hardest thing for most mainframers to do. We’ve been trained to keep our heads down, our mouths shut, and not rock the boat. Guess what: that boat’s about to be rocking, and it’s going to need people at the wheel, sails and oars who know what’s going on, and what to do. Tell them.

Next, it’s time to take stock of your entire IT environment (strategic initiatives, platforms, server farms, and especially Linuxes) and find some way to get a cost-benefit comparison underway between where you are, where you are planning to go, and what it would comparatively cost to do the same on the mainframe using containers, guests, and the many innovations available on the traditional operating systems, such as high-performance in-memory optimization of mainframe database applications.
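To make that comparison concrete, here is a minimal sketch of the kind of back-of-the-envelope cost model you might start from. Every workload name, dollar figure and cost category in it is a hypothetical placeholder rather than real pricing; the point is simply to line up what each workload costs on its current platform against an estimated consolidated-mainframe equivalent, including one-time migration effort.

```python
# A minimal sketch of a per-workload cost comparison: current distributed
# footprint vs. an estimated consolidated mainframe (containers/guests) target.
# All names and dollar figures are illustrative placeholders, not real pricing.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    current_annual_cost: float    # servers, licenses, staff time today
    mainframe_annual_cost: float  # estimated cost as a container/guest on Z
    migration_cost: float         # one-time effort to move it


def compare(workloads, years=3):
    """Print a simple multi-year comparison for each workload and in total."""
    total_current = total_target = 0.0
    for w in workloads:
        current = w.current_annual_cost * years
        target = w.mainframe_annual_cost * years + w.migration_cost
        total_current += current
        total_target += target
        print(f"{w.name:20s} current: ${current:>12,.0f}  "
              f"mainframe: ${target:>12,.0f}  delta: ${current - target:>12,.0f}")
    print(f"{'TOTAL':20s} current: ${total_current:>12,.0f}  "
          f"mainframe: ${total_target:>12,.0f}  delta: ${total_current - total_target:>12,.0f}")


if __name__ == "__main__":
    # Placeholder numbers: replace with your own audited figures.
    compare([
        Workload("web-front-end-farm", 400_000, 250_000, 120_000),
        Workload("analytics-linux",    300_000, 180_000,  90_000),
        Workload("dev-test-sandboxes", 150_000,  60_000,  40_000),
    ])
```

However rough, a model like this gives you and your management a shared, auditable starting point that can be refined as real pricing and utilization data come in.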

Don’t stop believing! It’s going to happen, so stay ahead of it.

Now, take a look at the pricing offerings for enhancing your mainframe presence, both on premises and in the cloud. Carefully study IBM’s Tailored Fit Pricing and compare it to where you are and where you’re going. Consider other options for optimizing your mainframe costs as well, for example the innovative solution that can help you concentrate your software licenses on fewer mainframe images while still using that software across your environment. And start planning the future of your IT initiatives with an eye to saving vast amounts of money on extraneous, buggy platforms, on the ballooning IT staffing needed to deal with them, and even on obsolete activities on your mainframes.

Communicate, communicate, communicate. Tell your peers. Tell your management. Write articles for your corporate newsletter and give educational lunchtime sessions. Tell the members of your local service and public speaking clubs. And then tell the rest of the world: you’re saving them from making the same mistakes you’ve avoided.

And if your non-mainframe colleagues give you grief, especially political grief, remember: they’re protecting territory that is no longer viable, and it’s your job to rescue your organization from becoming another Atlantis. Save who you can, beginning with the ship that will get you all to a successful future: your organization.

And while you’re having fun doing all of this, meet me at SHARE and we can talk about how it’s going. We’re all in this together, and the world will be a better place because of it.

Reg Harbeck is Chief Strategist at Mainframe Analytics ltd., responsible for industry influence, communications (including zTALK and other activity on IBM Systems Magazine), articles, whitepapers, presentations and education. He also consults with people and organizations looking to derive greater business benefit from their involvement with mainframe technology.
