Michael Porter at the Harvard Business Review recently said that the Internet of Things (IoT) will deliver an economic boom as big as those delivered by computer automation and the internet. That may well happen, but we’re going to have to figure out how we’re going to manage all the new workloads that it will also deliver. So what does that mean? Well, in 2009 there were 2 billion devices – PCs, smartphones, etc. – connected to the internet; by 2020, there will be over 7 billion, according to Gartner. Compare that to the number of “things” connected to the internet – 0.5 billion in 2009, and 26 billion by 2020, also according to Gartner.
Think about what those numbers mean. If there are 2 or 3 times as many smartphones, tablets and notebooks online within a few years, doesn’t that translate into 2 to 3 times as many transactions initiated by those users? What does that mean for the back-end systems that have to handle all those new transactions without delay? Are they in danger of running out of capacity? Are they already maxed out, or getting close? What’s the plan moving forward? Think of Obamacare, the Best Buy Black Friday site and the Southwest Airlines website – all crippled by a lack of capacity, which hurt those organizations through bad press and/or the loss of significant business revenue. If many more businesses’ systems are in danger of this type of capacity failure, then there needs to be a plan to address that shortcoming. Well, at least there had better be; the storm is coming, ready or not.
A more pressing worry for the future, in terms of managing capacity, is the number of “things” attached to the internet. Think about remote medical sensors, temperature sensors, vehicle monitoring sensors (speed, location, acceleration, deceleration, etc.), sound sensors, utility status sensors (electrical current, water pressure, storage tank status, etc.), shipping container status, bridge, building and road status, and much more. Then think about the data stored for all those sensors to allow data mining for status trends. Much of this data will be updated many times per day, per minute, or in many cases per second. Each unique update is a transaction, consisting of data accesses, database storage and many computing cycles. And this activity is going to increase by 20x or perhaps 100x, depending upon whose reports you read. All of this will seriously compound the computing capacity crisis already threatened by the increase in user devices.
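To make the scale of that concrete, here is a minimal back-of-envelope sketch in Python. The sensor counts and update rates below are entirely hypothetical illustrations (not figures from any report), but they show how quickly modest per-sensor update rates add up to a sustained transaction load:

```python
# Back-of-envelope estimate of IoT transaction volume.
# All sensor counts and update rates are hypothetical illustrations.

SECONDS_PER_DAY = 24 * 60 * 60

# (sensor type, number of sensors, updates per sensor per day)
sensor_fleets = [
    ("medical sensors",        1_000_000,  24),      # hourly vitals
    ("vehicle telemetry",      5_000_000,  86_400),  # once per second
    ("utility status sensors", 10_000_000, 1_440),   # once per minute
    ("shipping containers",    2_000_000,  96),      # every 15 minutes
]

total_updates_per_day = sum(count * rate for _, count, rate in sensor_fleets)
average_tps = total_updates_per_day / SECONDS_PER_DAY

print(f"Updates per day: {total_updates_per_day:,.0f}")
print(f"Average transactions per second: {average_tps:,.0f}")
```

Even with these made-up numbers, the sustained average lands in the millions of transactions per second – and unlike user-driven traffic, sensors don’t sleep at night, which is exactly the flat, always-on load discussed below.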
So how will current systems provide all this extra capacity? Are we ready to just rip and replace our current data centers with newer and bigger ones? What would that cost? Remember, we’re talking about an increase of 10 or 100 times current storage and computing capacity. And all that computing has to be fast; it can’t slow down to a crawl.
Well, we have actually stared into the abyss like this three times before. First, with the advent of ATMs, which stressed the banking systems of the day considerably; but those systems were mainframe-based, and they managed to cope with the increase in demand. Second, there was the e-commerce revolution, which again stressed back-end (mainframe) systems; the front-end systems were newer distributed systems, and they didn’t cope very well initially. Third is the current mobile revolution, which is once again stressing back-end systems, but we are coping: back-end mainframe systems are once again able to adjust to the bigger workloads, while private and public cloud infrastructures are helping distribute loads nicely.
However, IoT is a different animal. With previous disruptive workloads, demands for data occurred at any time, anywhere, day or night – and systems were eventually able to handle those inconsistent peak demands. With IoT, those peaks are going to be less peak-like and more flat, but sustained at those higher, peak-like levels. Most current systems aren’t going to be able to handle that, and the cloud isn’t going to help this time. The exception is mainframe systems – they are designed to handle long-lasting peak workloads. In fact, banks use them today for their most intense transaction-processing environments: online (think ATMs and mobile banking) and batch (periodic processing for account reconciliation, payment processing, billing, reporting, etc.).
In fact, you could look at what IoT is going to deliver as a series of ongoing, never-ending online and batch jobs. Organizations that will have to handle IoT data are going to have to take a long, hard look at mainframe computing systems if they hope to handle the new workloads as efficiently and cost-effectively as possible.
Mainframe systems, properly provisioned and effectively optimized, are the best bet to handle this new disruption – they are already the best systems in the world for handling large increases in throughput. So upgrading and optimizing existing mainframe systems will go a long way toward helping IT organizations cope with unforeseen increases in demand on system capacity. Why do I say that? Well, every second, 1.1 million high-volume customer transactions occur on mainframes. To put this into perspective, Google handles almost 60,000 searches per second – so mainframes carry roughly 18 times Google’s per-second volume, and the newest mainframe systems are far from pushing the theoretical limits of the machines.
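For anyone who wants to sanity-check that comparison, the arithmetic is straightforward; the two figures below are simply the ones cited above, not independently measured:

```python
# Ratio of cited mainframe transaction volume to cited Google search volume.
mainframe_tps = 1_100_000          # transactions per second, as cited above
google_searches_per_sec = 60_000   # approximate searches per second, as cited above

ratio = mainframe_tps / google_searches_per_sec
print(f"Mainframes handle roughly {ratio:.0f}x Google's per-second search volume")
# -> roughly 18x
```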
Preconceived platform biases are going to have to take a back seat to reality here, folks. If there is a new IoT economic boom right around the corner, AND you’re running mainframe systems now, you’re fortunate enough to be on the fast track to prepare yourself for it. And those who are ready first will have a competitive advantage over those who are not ready…
Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. As a “Friend of the Mainframe”, Allan’s experience addressing both the technical and business needs of Global Fortune 500 customers has provided him with great insight into the industry’s opportunities and challenges – making him a sought-after writer and speaker on the topic of databases and mainframes.
I agree mainframe systems are well positioned to capture the increased IoT transaction data, particularly (at least for now) when security issues are involved.
On the other hand, can they (or anyone else) handle simultaneous ad hoc analysis (never mind reporting) without reverting to such approaches as cubes, warehouses or attached processors handling largely pre-defined analyses?
I see a second coming of artificial intelligence coupled with a dynamic version of what was once firmware (think of the latest z/OS advances, the caching strategies.)
On the other hand, local processing is critical.
The tailing of customers with online devices in retail stores (IoT) will be enhanced when cameras further identify the customers, capturing & quantifying body language for not simply messaged offers, but a targeted sales approach: direct or seemingly accidental, male/female/etc., well dressed or casual, informed by a knowledge of the customer’s prior history.
The targeting algorithms may be driven by mainframe data, but much of the active customer analysis & direction would be locally processed with callouts for limited items of mainframe data.
Proactive security applications in schools, transit stations & hubs, or smart personal devices for health, fire, police or military personnel (or even patients!) could use, but should not rely on, the mainframe.
Our friend over at Dancing Dinosaur agreed with this, and wrote about it back in 2014: https://dancingdinosaur.wordpress.com/2014/08/04/put-the-mainframe-at-the-heart-of-the-internet-of-things/