Everyone is familiar with the promise of AI and machine learning. Like a human, these intelligent systems are supposed to be trainable through formal means and also able to learn by deriving meaning from the data presented to them. Of course, that’s the theory. In practice, results vary in large part based on the quality of the data available. In too many cases, data sets include limited historical information. So, you might be able to tell which way the wind is blowing now, but you may not be able to discern the direction of the prevailing wind.
In other words, your results will only be as good as your data.
Historical Data and the Mainframe
Real time gets a lot of attention and is certainly an important aspect of the mainframe, but one of the platform’s most distinctive and valuable assets is actually the long-term historical data it has accrued. This deep historical perspective is invaluable. However, it can be difficult to make the most of this data in a busy Big Iron environment, where analytics can be costly and complex to perform.
Yes, of course, AI can be done on the mainframe, but just because it is possible doesn’t make it the best choice. From a technical standpoint, there are other platforms that can do the job better, cheaper, and faster. Those platforms are in the cloud.
Machine learning models are built on patterns that accumulate in data over the years. That’s why there is so much potential value in incorporating all available data, not just the most recent. When you unlock mainframe historical data in the cloud, you can uncover insights and even improve machine learning models by giving them a view of long-term historical patterns.
By developing this new capability, you can take all that mainframe data to a cloud environment that is better suited to transformation, analysis, and insight. In fact, you can now operationalize your machine learning models in ways that were never practical before.
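To make that concrete, here is a minimal sketch using synthetic data and scikit-learn (both assumptions; nothing here comes from a real mainframe data set). It fits the same regression twice, once on only a recent window and once on the full history. A yearly cycle is hard to pin down from two months of observations, but becomes identifiable when years of history are available.

```python
# A minimal sketch with synthetic data: the same regression fit on a short,
# recent window versus the full history. The annual cycle is only
# identifiable when enough history is available.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(2000)                              # ~5.5 years of daily points
signal = 10 + 3 * np.sin(2 * np.pi * t / 365)    # base level plus annual cycle
y = signal + rng.normal(0, 0.5, t.size)          # observed, noisy values
X = np.column_stack([np.sin(2 * np.pi * t / 365),
                     np.cos(2 * np.pi * t / 365)])

test = slice(-30, None)                          # forecast the last 30 days
for name, train in [("recent 60 days", slice(-90, -30)),
                    ("full history", slice(None, -30))]:
    model = LinearRegression().fit(X[train], y[train])
    err = mean_absolute_error(y[test], model.predict(X[test]))
    print(f"{name}: test MAE = {err:.2f}")       # full history typically wins
```

In a real pipeline the inputs would be decades of mainframe transaction history rather than a synthetic sinusoid, but the dynamic is the same: patterns that repeat over years are invisible to a model that only ever sees last quarter.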
Leveraging Economies of Scale
Economies of scale is essentially a code word for cloud computing, just as Big Iron is code for the mainframe. What’s the point? The collective IT experience of the past 15 years is that moving workloads from a monolithic architecture, like mainframe Big Iron, to a distributed architecture in the cloud pays off most for new paradigms like AI and machine learning, where economies of scale are crucial. In fact, cloud scale can’t really be compared to any on-premises option.
That’s because the cloud enables you to spin up capacity at any scale and for any duration, which is perfect for many AI and ML tasks, especially those that involve extremely large data sets. But whether the task is large or routine, the fundamental pay-as-you-go, scale-up-or-scale-down flexibility of the cloud gives you a game-changing yet affordable option.
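The economics are easy to sketch. Every figure below is a hypothetical placeholder, not a quote from any provider, but the arithmetic shows why bursting beats holding peak capacity year-round:

```python
# Back-of-the-envelope comparison (all figures hypothetical): renting burst
# capacity by the hour versus carrying the same peak capacity all year.
HOURLY_RATE = 3.00        # $/instance-hour, a placeholder price
PEAK_INSTANCES = 200      # instances needed only while a job runs
RUN_HOURS = 8             # duration of one burst workload
RUNS_PER_YEAR = 12        # e.g., monthly model retraining

burst = HOURLY_RATE * PEAK_INSTANCES * RUN_HOURS * RUNS_PER_YEAR
always_on = HOURLY_RATE * PEAK_INSTANCES * 24 * 365   # capacity never released

print(f"pay-as-you-go bursts:    ${burst:,.0f}/year")       # $57,600
print(f"peak capacity held 24/7: ${always_on:,.0f}/year")   # $5,256,000
```

The exact numbers will differ in every shop, but the shape of the comparison holds: you pay for the peak only while you actually use it.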
This is far from being a polarizing discussion about whether to be completely committed to mainframe or completely committed to the cloud. It is, instead, a discussion about engaging the best tool for the job.
If the mainframe works for systems of record and intense transaction processing, let it continue. It is now possible to do so while offloading much of your storage and data management workload to the cloud. Keep the business logic, keep the transaction processing logic, keep the data throughput, and then combine the sheer processing power of Big Iron with the economies of scale that Big Data enjoys in the cloud. The result is a win-win: the best of both worlds.
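What does offloading data management look like at the data level? A common first step is converting mainframe extracts into open formats. The sketch below is illustrative only: the fixed-width record layout, field positions, and file names are hypothetical, though cp037 is a real Python codec for a common EBCDIC code page.

```python
# Illustrative only: decode a fixed-width EBCDIC extract into an open,
# columnar format that cloud analytics engines can read directly.
# The record layout and file names are hypothetical.
import pandas as pd

RECORD_LEN = 80                                   # hypothetical record size
FIELDS = {"account": (0, 10), "amount": (10, 22), "tx_date": (22, 30)}

def parse_extract(path: str) -> pd.DataFrame:
    rows = []
    with open(path, "rb") as f:
        while record := f.read(RECORD_LEN):       # one fixed-width record
            text = record.decode("cp037")         # EBCDIC code page -> str
            rows.append({name: text[lo:hi].strip()
                         for name, (lo, hi) in FIELDS.items()})
    return pd.DataFrame(rows)

df = parse_extract("daily_extract.ebc")
df.to_parquet("daily_extract.parquet")            # needs pyarrow installed
```

Once the data is in an open format like Parquet, cloud warehouses and ML services can query it without ever touching the mainframe again.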
Cloud, Mainframe, and Hybrid IT
The imperative is to bring the cloud closer to the mainframe in a hybrid model so that both ecosystems complement each other and continue doing what they do best.
Everybody is talking about hybrid models, but how do you actually create one? How do you keep the workloads that embody your business logic optimized on the mainframe, while also unlocking access to historical data that you can plug into the cloud, so you can leverage economies of scale to do AI cost-effectively? That is the sweet spot of a hybrid model that works for you.
We have established that the two domains, cloud and mainframe, are complementary; for many enterprises, both are also necessary to achieve their business goals. Offloading data processing and the associated data management workloads to the cloud builds a bridge between them, so the separation is no longer an issue.
In other words, you don’t have to choose between mainframe and cloud anymore; it’s not an either-or choice. That’s the essence of hybrid.
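Continuing the sketch above, the bridge can be as simple as landing the converted extract in cloud object storage. This example assumes AWS S3 and the boto3 SDK; the bucket and key names are placeholders.

```python
# Assumes AWS S3 via the boto3 SDK; bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file("daily_extract.parquet",            # file converted from the extract
               "example-enterprise-datalake",      # destination bucket (placeholder)
               "mainframe/daily_extract.parquet")  # object key (placeholder)
```

From there, the mainframe keeps running transactions while analytics, AI, and ML run on the cloud side against the same data.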
A Renewed Interest in Legacy Data Sets Transcends Silos
As with any business challenge, technology is just one side of the solution. Because mainframe data sets have been locked away in proprietary formats and stored on sluggish platforms (such as physical tape and virtual tape libraries), adding them to the data analytics pipeline tends to be complicated by corporate bureaucracy as well. Once the technological barriers are removed, the challenge remains of rallying multiple organizational stakeholders around the imperative of liberating mainframe data sets and making the best use of this remarkable resource. But that is a challenge results can quickly solve. Blaze a path to hybrid cloud, with potent AI and ML analytics, and the organization will follow!
With over two decades of hands-on experience in enterprise computing, data center management, mainframe system programming, and storage development, I’m now on a mission to accelerate cloud adoption at large enterprises by making their most trusted core business platforms more flexible, affordable, and cloud-compatible.
Connect with Gil on LinkedIn.