Eye of the Needle

In 1965, Gordon Moore observed that integrated circuits were packing ever more computing into less and less space at a geometric rate, an observation that came to be known, in various forms, as Moore’s Law.

In 1966, the original Star Trek TV series began each episode with, “Space, the final frontier…”

Rewind to 1956: in his famous short story “The Last Question,” Isaac Asimov envisioned a computer that took up less and less space until it existed entirely in hyperspace.

Concurrent with these milestones, the space race was underway, and all available computing power was in some way relevant to the effort to reach the moon – so much so that the Apollo program had a substantial impact on the availability and development of integrated circuit technology. But even IBM’s System/360, one of the most advanced computers of its era (and, in its successors, ever since), was not sufficiently miniaturized to fly onboard the actual space vehicles, so it handled large-scale computing tasks on the ground while much smaller and less powerful computers were carried aboard the spacecraft. One could see in this the concept of client-server computing: a large centralized repository of data and computing, and a smaller local device that interacts with it.

However, by the end of the 1960s, a parallel model for computing, peer-to-peer, was becoming more and more popular. Indeed, popularity was of its very essence, as this democratic approach apparently avoided having a central point of power and control.

Well, it didn’t so much avoid it as ignore it and pretend it was going away. One could liken it to the Somebody Else’s Problem (SEP) Field from Douglas Adams’ “The Hitchhiker’s Guide to the Galaxy” series: a cloaking device that made people deliberately ignore something at a subconscious level because it wasn’t their problem. In like manner, the internet-connected peer-to-peer world of consumer computing was increasingly perceived as having no centralized important computers, just bigger and smaller servers that were all equal in their own unique ways.

If this seems like a rather nebulous approach to dealing with critical resources, well, that’s where things went next: explicit nebulosity—cloud computing. As has been waggishly asserted, cloud computing is just somebody else’s computer. Ergo, somebody else’s problem (as long as you can afford to pay them).

It’s rather interesting that, during this entire journey, the IBM mainframe platform continued to host the system-of-record workloads of the world economy, on a platform rich with science fiction references. Asimov had hyperspace? The IBM mainframe had hiperspace, a high-performance data space that gives a given address space access to additional storage beyond its own addressable memory. Star Trek had the USS Enterprise. IBM’s z/OS has UNIX System Services (USS) for enterprise computing.

But, at the end of the day, whether on-premises or hiding in a geographically isolated data center, all this computing power had to physically exist somewhere. And consume electricity, occupy a physical footprint, and require capacity for its weight and cooling. Oh, and be secure, compliant with regulations, available, reliable, future-proofed, etc. At least, assuming you thought to negotiate those qualities of service into the cloud services contract.

But here’s the thing: having a system of record is a matter of some gravity—even a center of gravity—and some smooth peer-to-peer continuum concept doesn’t begin to address the requirements that are entailed. You have to get specific, to go from “cloud” to “could” when responding to those exigencies.

In other words, you can’t just use the concept of cloud computing to collapse all your critical computing into a single point and let someone else handle it. Yes, there is room for utility computing — if the qualities of service are ironed out and specified so that your fiduciary responsibilities aren’t just treated like dust in the wind. Just like the gravity in a singularity, there are some things you can’t escape. But you can take responsible steps to make them superable, especially when superior technological options are available.

And that’s the real point: knowing which requirements connect directly to your organization’s survival goes beyond mere utility metrics. Those requirements aren’t somebody else’s problem. Yet for too long, architecture and platform decisions have excluded essential qualities because it was assumed that either nobody or everybody had them, when in fact the IBM mainframe has had them all along, often uniquely so.

Those aspirational aspects of physical space, environmental friendliness, and conceptual simplicity have now become part of what the IBM mainframe clearly offers, even in a rack-mountable version with the advent of the z16.

That’s important because, as we evaluate the various cloud offerings “out there” from the likes of AWS, Google, and Microsoft, having a complete set of relevant metrics makes for responsible, professional business decisions that are not based on bias or incomplete criteria.

Of course, that doesn’t excuse cloudy thinking about critical business workloads and processing. But when the actual criteria and available options are no longer treated as conceptual zeroes, the division of responsibility and allocation of space cease to be fiction exercises.

Reg Harbeck is Chief Strategist at Mainframe Analytics Ltd., responsible for industry influence, communications (including zTALK and other activity on IBM Systems Magazine), articles, whitepapers, presentations and education. He also consults with people and organizations looking to derive greater business benefit from their involvement with mainframe technology.