Six ways mainframe in-memory technology can save you

When people hear talk about in-memory technology, they immediately assume the discussion revolves around Big Data and analytics being run on distributed systems, and that the database part of the discussion focuses on things like SAP HANA, TIBCO, VoltDB, or even IBM BLU Acceleration. While these are all great products, and distributed computing is everywhere, they’re mostly irrelevant in a mainframe-centric discussion.

Now some mainframe folks might rightly start thinking about buffering, which is a type of in-memory technology; it certainly applies to DB2 databases, and is an everyday concern for mainframe folks from the DBA through to the CIO. But that’s not what I’m talking about. I’m talking about mainframe high-performance in-memory technology that has been, and continues to be, as revolutionary for the mainframe world as those distributed systems products mentioned above are for the distributed world. Most of the big banks use mainframe in-memory technology, and if you’re not using it, it could help you solve some of the most serious problems you’re facing right now.

Here are just six ways mainframe in-memory technology can save you:

1. Accelerate your batch applications

Accelerating batch applications is the most direct way in which mainframe in-memory technology can make a difference in the life of a CIO: it delivers a noticeable improvement in application performance and in resource usage from the application’s perspective, and it delivers the most immediate ROI.

It works by allowing a batch app to read its most frequently accessed reference data using a very short code path. To do this, small amounts of data are copied into high-performance in-memory tables and accessed from there through a simple API. Using this technique, some applications have been made to run 100x faster, and even modern, well-designed batch applications can still be made to run 30x faster or better. This can mean an overall saving of as much as 5 percent on the monthly bill. And no changes to the database are required at all.
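To make the idea concrete, here is a minimal sketch in Java, assuming a hypothetical CURRENCY_RATES reference table and plain JDBC access to DB2 (the actual in-memory products have their own table structures and APIs): the hot reference data is loaded once at the start of the run, and every per-record lookup afterwards is an in-memory operation rather than a database call.

```java
import java.sql.*;
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative sketch: cache a small, heavily read reference table in memory
 * at the start of a batch run, so per-record lookups avoid the DB2 code path.
 * Table and column names are hypothetical.
 */
public class BatchRateCache {
    private final Map<String, Double> rates = new HashMap<>();

    /** Load the reference data once, before the main batch loop. */
    public void load(Connection db2) throws SQLException {
        try (Statement stmt = db2.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT CURRENCY_CODE, RATE FROM CURRENCY_RATES")) {
            while (rs.next()) {
                rates.put(rs.getString("CURRENCY_CODE"), rs.getDouble("RATE"));
            }
        }
    }

    /** Short code path: a hash lookup instead of an SQL call per record. */
    public double rateFor(String currencyCode) {
        Double rate = rates.get(currencyCode);
        if (rate == null) {
            throw new IllegalArgumentException("Unknown currency: " + currencyCode);
        }
        return rate;
    }
}
```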

2. Accelerate your online applications

From a technical perspective, optimizing online transaction processing (OLTP) applications with high-performance in-memory technology is really no different from optimizing batch applications. You pick the data that you access most often, copy it into high-performance in-memory tables, and get the application to access it from there using an API and a very short code path. The other 95 percent of data accesses are made directly from DASD, as before.

The big difference is in identifying the data that is best suited for optimized access. For batch applications, it’s the data that is accessed hundreds of thousands of times in an hour, or in a batch run. For OLTP applications, it’s a little different: you may have to look for the data that is accessed a few hundred times for every online transaction, over the course of an hour or a day. It can be a little harder to identify, but once it is, the savings in processing time, resource usage and related costs are no less than they are for batch applications.
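One practical way to find those candidates is to instrument the data-access layer and count reads per table over a sampling window. A minimal sketch, assuming you can hook your existing read calls; the class and method names here are purely illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Illustrative sketch: count reads per table during a sampling window to spot
 * reference data that is hit often enough to justify an in-memory copy.
 */
public class AccessProfiler {
    private final Map<String, LongAdder> readCounts = new ConcurrentHashMap<>();

    /** Call this from the data-access layer each time a table is read. */
    public void recordRead(String tableName) {
        readCounts.computeIfAbsent(tableName, t -> new LongAdder()).increment();
    }

    /** Print the hottest tables at the end of the sampling window. */
    public void report() {
        readCounts.entrySet().stream()
                .sorted((a, b) -> Long.compare(b.getValue().sum(), a.getValue().sum()))
                .limit(10)
                .forEach(e -> System.out.printf("%-30s %,d reads%n",
                        e.getKey(), e.getValue().sum()));
    }
}
```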

3. Accelerate your DB2 database

While in-memory technology applied to applications does nothing directly to optimize DB2, the application optimization by itself does have a beneficial effect on the database. In cases where multiple applications (both batch and online) can be optimized using application-specific in-memory technology, the cumulative effect can result in significant improvements to overall DB2 performance.

Even where the benefit to any single application is minimal, the cumulative effect can be significant. The result can be an effective increase in system throughput capacity and a significant reduction in overall DB2 resource usage or, alternatively, an opportunity to take on significantly more workload.

4. Solve the worst business rules maintenance challenges

Believe it or not, there are still organizations running mainframe applications with business rules embedded directly in the code. Why would they still do this? For one of two reasons: either the applications are legacy applications that they are planning to replace or, more likely, there is an urgent business need for extremely fast processing of business rules. The unfortunate side effect is that rule maintenance is extremely painful and time-consuming, because every rule change means a program recompile.

In-memory technology can solve this problem by externalizing the rules into high-performance in-memory tables and accessing them at program speed, using a very short code path. Business rules are then easily maintained, without program recompiles or any other such complication.
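As a rough illustration, with an invented discount-rule layout, the rules become rows held in an in-memory structure, so changing a rule is a data update rather than a code change and recompile.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch: business rules externalized as data rows instead of
 * hard-coded IF statements. In practice the rows would be loaded from an
 * in-memory table; here they are built inline to keep the example small.
 */
public class DiscountRules {

    /** One externalized rule: order total threshold -> discount percentage. */
    record Rule(double minOrderTotal, double discountPct) {}

    private final List<Rule> rules = new ArrayList<>();

    /** Rules are data; adding or changing one does not require a recompile. */
    public void addRule(double minOrderTotal, double discountPct) {
        rules.add(new Rule(minOrderTotal, discountPct));
    }

    /** Short code path: scan the in-memory rules for the best match. */
    public double discountFor(double orderTotal) {
        double best = 0.0;
        for (Rule r : rules) {
            if (orderTotal >= r.minOrderTotal() && r.discountPct() > best) {
                best = r.discountPct();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        DiscountRules rules = new DiscountRules();
        rules.addRule(1_000.0, 2.0);    // would normally come from the rules table
        rules.addRule(10_000.0, 5.0);
        System.out.println(rules.discountFor(12_500.0)); // prints 5.0
    }
}
```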

5. Change legacy packaged applications into faster in-memory applications

Some IT organizations run packaged applications that cannot be modified or optimized. Others have legacy applications but no longer have the in-house skills to change or optimize them. Still others are concerned about the risk involved in changing legacy code. These are all valid concerns, and they justify the decision to say no to code changes.

There are solutions even for these folks: an in-memory solution with an SQL interception/redirection layer can turn these legacy DB2 applications into better-performing in-memory DB2 applications without changing a line of code.
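The product-level interceptors do this beneath the application, which is why no source changes are needed. Purely to illustrate the concept, here is a minimal sketch of the redirection decision itself; the table matching and the in-memory cache interface are hypothetical simplifications.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative sketch of SQL redirection: queries against designated
 * reference tables are answered from an in-memory copy, everything else is
 * passed through to DB2 unchanged. Real interception products work below
 * the application, so existing application code does not change at all.
 */
public class SqlRedirector {

    /** Hypothetical interface to an in-memory copy of a reference table. */
    interface CachedTable {
        ResultSet query(String sql);
    }

    private final Set<String> redirectedTables;
    private final Map<String, CachedTable> inMemoryCopies;
    private final Connection db2;

    public SqlRedirector(Set<String> redirectedTables,
                         Map<String, CachedTable> inMemoryCopies,
                         Connection db2) {
        this.redirectedTables = redirectedTables;
        this.inMemoryCopies = inMemoryCopies;
        this.db2 = db2;
    }

    /** Decide where each statement goes; the caller is unaware of the split. */
    public ResultSet executeQuery(String sql) throws SQLException {
        for (String table : redirectedTables) {
            // Naive text match, purely for illustration.
            if (sql.toUpperCase().contains(" FROM " + table)) {
                return inMemoryCopies.get(table).query(sql);  // short code path
            }
        }
        // Normal DB2 path (resource handling simplified for the sketch).
        Statement stmt = db2.createStatement();
        return stmt.executeQuery(sql);
    }
}
```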

6. Save on spend

For the most part, mainframe in-memory technology makes things run faster. It does this by letting processing take place with fewer system resources: less CPU, less I/O, and fewer MSUs consumed. That translates directly into cost savings; I’m talking about your monthly bill, and what that means come budget time.

You may have realized that these are some of the biggest challenges for any organization running mainframe systems at the core of its business operations. In fact, it is likely that you are struggling with one or two of them right now. My advice is to look into mainframe in-memory technology. In-memory technology is not only for the sexy new distributed systems, and it never has been: it’s been running on mainframe systems for decades, and it is still used by virtually all of the biggest banks and insurance companies in the world.

Do some due diligence and look into it. I’m willing to bet that you can solve at least one big problem this year with mainframe in-memory technology.

Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics – the global leader in Data Performance and Optimization. A “Friend of the Mainframe”, Allan has addressed both the technical and business needs of Global Fortune 500 customers, giving him great insight into the industry’s opportunities and challenges and making him a sought-after writer and speaker on the topic of databases and mainframes.
