Embracing In-Memory Technology

Organizations are always encouraging their IT professionals to get the highest level of performance from their applications and systems. It only makes sense that businesses want to achieve a high return on their IT investment. Of course, there are many ways to optimize applications, and it can be difficult to apply the correct techniques to the right applications. Nevertheless, one area where most organizations can benefit is making better use of system memory.

Why is this so? Well, there are three primary factors that impact the performance and cost of computer applications: CPU usage, I/O, and concurrency. When the same amount of work is performed using fewer I/O operations, CPU savings occur and less hardware is needed to do the same work. A typical I/O operation (read/write) involves accessing or modifying data on disk systems; disks are mechanical and have latency – that is, it takes time to first locate the data and then read or write it. Of course, there are many other factors in I/O processing that add overhead and can increase costs, all depending upon the system and type of storage you are using.
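To get a feel for why that latency matters, consider a rough back-of-the-envelope comparison. The short Python sketch below uses assumed figures of roughly 5 milliseconds per random disk read and 200 nanoseconds per memory access (illustrative numbers, not measurements from any particular system) to show how quickly disk latency dominates a workload that performs millions of lookups.

    # Back-of-the-envelope comparison of disk I/O vs. in-memory access.
    # The latency figures are illustrative assumptions, not measurements.
    DISK_READ_SEC = 0.005          # assume ~5 ms per random disk read
    MEMORY_ACCESS_SEC = 0.0000002  # assume ~200 ns per in-memory lookup

    LOOKUPS_PER_TRANSACTION = 10
    TRANSACTIONS = 1_000_000

    disk_time = DISK_READ_SEC * LOOKUPS_PER_TRANSACTION * TRANSACTIONS
    memory_time = MEMORY_ACCESS_SEC * LOOKUPS_PER_TRANSACTION * TRANSACTIONS

    print(f"Disk-bound:  {disk_time / 3600:.1f} hours")   # ~13.9 hours
    print(f"In-memory:   {memory_time:.1f} seconds")      # ~2.0 seconds

Even if the assumed numbers are off by an order of magnitude, the gap between the two approaches remains enormous.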

So, you can reduce the time it takes to process your batch workload by using memory more effectively. With more memory available, you can take advantage of increased parallelism for sorts and improve the single-threaded performance of complex queries. And for OLTP workloads, large memory provides substantial latency reduction, which leads to significant response time reductions and increased transaction rates.

Storing and accessing data in memory eliminates mechanical latency and improves performance. Yet many organizations have not taken advantage of the latest improvements in IBM's modern mainframes, like the z15, with up to 190 configurable cores and up to 40 TB of memory. That is a lot of memory. Even though you probably do not have 40 TB of memory configured on your mainframe system, chances are you have more than you use. And using it can improve the end-user experience!

Therefore, better use of memory can significantly improve the performance of applications, enabling business users and customers to interact more rapidly with your systems. This means more work gets done, more quickly, resulting in an improved bottom line.

Growth and Popularity of In-Memory Data

Customers continue to experience pain with issues that in-memory processing can help to alleviate. For example, consider the recent stories in the news about the shortage of COBOL programmers. New Jersey (and other states) put out the call for COBOL programmers because many of the state's systems, including its unemployment insurance system, run on mainframes. With the COVID-19 pandemic, unemployment rates have risen dramatically, causing those systems to experience record demand for services.

Many of these stories focused on the age of the COBOL applications when they should have focused on the need to support and modify the systems that run those states. COBOL is still reliable, useful, and well-suited for many of the tasks and systems that it continues to power. It is poor planning when you do not have skilled professionals to tend to mission-critical applications. And in-memory data processing could help to alleviate the heavy burden on those systems, allowing them to respond more quickly and serve more users.

We also need to consider the modernization of IBM’s mainframe software pricing. Last year (2019), IBM announced Tailored Fit Pricing (TFP) to simplify the traditionally very complex task of mainframe software pricing and billing. This modernization effort strives to establish a simple, flexible, and predictable cloud-like pricing option. Without getting into all of the gory details, IBM is looking to eliminate tracking and charging based on monthly usage and instead charge a consistent monthly bill based on the previous year’s usage (plus growth).

But TFP is an option, and many organizations are still using other pricing plans. Nevertheless, a more predictable bill is coveted by most organizations, so TFP adoption continues to grow. Successfully moving your organization to TFP involves a lot of learning and planning to achieve the desired goal of predictable billing at a reasonable cost. That said, it makes considerable sense for organizations to rationalize their software bills to the lowest point possible in the year before the move to TFP. And you guessed it, adopting techniques to access data in memory can lower usage – and possibly your software bill. So optimizing with in-memory techniques before moving to TFP makes a lot of sense.

DataKinetics’ tableBASE: An In-Memory Technique

It should be clear that in-memory optimization is a technique that can improve performance and save you money. But how can you go about optimizing your processes using in-memory data?

Well, there are several different ways to adopt in-memory optimization for your applications and systems, but perhaps the best approach, requiring the least amount of time and effort, is to use a product. One of the best in-memory data optimization products is DataKinetics' tableBASE, a proven mainframe technology that manages data using high-performance in-memory tables. The product is ideal for organizations that need to squeeze every ounce of power from their mainframe systems to maximize performance and transaction throughput while minimizing system resource usage at the application level.
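Although tableBASE itself is a proprietary product with its own call interface, the underlying pattern it accelerates is easy to illustrate. The Python sketch below is not the tableBASE API; it simply shows the general technique, using a hypothetical currency_rates reference table: load a heavily read table into memory once, then serve every subsequent lookup from an in-memory structure instead of issuing another I/O.

    # Generic illustration of the in-memory reference-table technique.
    # This is NOT the tableBASE API; the table name and keys are hypothetical.
    import sqlite3

    # Stand-in for a real database holding a frequently read reference table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE currency_rates (currency_code TEXT, rate REAL)")
    conn.executemany("INSERT INTO currency_rates VALUES (?, ?)",
                     [("USD", 1.00), ("EUR", 0.92), ("CAD", 1.36)])

    # One-time load: read the table and index it by key in memory.
    rates = {code: rate
             for code, rate in conn.execute(
                 "SELECT currency_code, rate FROM currency_rates")}

    # Every transaction now does a dictionary lookup, not a database read.
    def convert(amount, currency_code):
        return amount * rates[currency_code]

    print(convert(100.0, "EUR"))   # no I/O on the hot path

The design point is simple: the cost of loading the table is paid once, while the savings are realized on every one of the many lookups that follow.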

Although every customer deployment is different, using tableBASE to optimize in-memory data access can provide a tremendous performance boost. For example, a bank that was running up against its batch window deployed tableBASE, and batch runs that previously took more than 7 hours in total to finish completed in less than 30 minutes in total afterward. That is an improvement of more than 90 percent!

And tableBASE is a time-tested solution; many customers have used it to optimize their applications for decades.

The latest news for tableBASE is that IBM has partnered with its vendor, DataKinetics, to deliver an advanced in-memory data optimization solution for Z systems applications. So now you can engage with DataKinetics and implement tableBASE, or work with IBM and its new IBM Z Table Accelerator. Either option can help you implement advanced in-memory data optimization for your Z systems applications.

The Bottom Line

The current resurgence of in-memory optimization techniques is being driven by organizations that need to improve performance, lower costs, and get the most out of their mainframe investment. I have to say, that just sounds like good sense to me!

Originally Published at: Data and Technology Today

Regular Planet Mainframe Blog Contributor
Craig Mullins is President & Principal Consultant of Mullins Consulting, Inc., and the publisher/editor of The Database Site. Craig also writes for many popular IT and database journals and web sites, and is a frequent speaker on database issues at IT conferences. He has been named by IBM as a Gold Consultant and an Information Champion. He was recently named one of the Top 200 Thought Leaders in Big Data & Analytics by AnalyticsWeek magazine.
