Performance Solutions

Last week we looked at some of the most common reactions to the need for improved mainframe performance in the datacenter. Not all of them were sound solutions, so this week we look at six mainframe performance optimization solutions that genuinely deliver.

Smart Technology Solutions

There are many smart technology solutions provided by IBM, the big VARs, and the smaller specialty vendors in the mainframe ecosystem. These are proven technologies that have been improving system performance and saving on costs in large corporate datacenters for years. Here are six of the best of the best:

1. In-memory Technology

People keeping up with the buzz in the tech business have been familiar with the concept of in-memory databases for five years or more, within the context of Big Data and data analytics. With the plummeting cost of memory, it only made sense to put as much data as possible into memory and to access it from there, bypassing the physical/mechanical I/O portion of data reads and writes. First there were hybrid disk/memory databases, followed by databases that are 100% in-memory. But it started long before that.

The truth is that in-memory technology has been around for a very long time; if you count caching and buffering techniques, it has been with us since the beginning of computing. A different breed, high-performance in-memory technology, has been running in mainframe systems for decades, and it continues to do so in most of the large finance and insurance systems running today.
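
To make the idea concrete, here is a minimal sketch of the read-through caching pattern that sits underneath most in-memory technology (Python is used purely for illustration; the class and the simulated disk read are invented for this example). Data already in memory is served instantly, and the slow I/O path is taken only on a miss.

```python
import time

class ReadThroughCache:
    """Minimal read-through cache: serve hot records from memory,
    fall back to the slow I/O path only on a miss."""

    def __init__(self, backend_read):
        self._backend_read = backend_read   # slow path (disk, database, etc.)
        self._store = {}                    # in-memory copy of hot records

    def get(self, key):
        if key in self._store:              # hit: no physical I/O at all
            return self._store[key]
        value = self._backend_read(key)     # miss: one slow read ...
        self._store[key] = value            # ... then keep it in memory
        return value

def slow_disk_read(key):
    """Stand-in for a physical read; in real systems this is the
    expensive part that in-memory technology avoids."""
    time.sleep(0.05)
    return f"record-{key}"

cache = ReadThroughCache(slow_disk_read)
cache.get("cust-1001")   # first access pays the I/O cost
cache.get("cust-1001")   # repeat access is served from memory
```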

It’s a tried-and-true technology that enables processing to finish faster, use fewer system resources (CPU, I/O, MSU), and incur lower operational costs. If IT organizations are looking for a magic bullet, a solution that improves performance while helping to control cost, it’s right there under their noses.

2. IT Big Data and Analytics

Anyone keeping up with the buzz in the tech business is also aware of Big Data; it has been talked about to death, and then some. But that’s for good reason: the enterprise data that large businesses possess (information about their customers, their locations, their transaction records, their buying habits, you name it) is a highly valuable resource for extracting critical intelligence through analytics. We know the value of this data, and we know that it can make all the difference in marketing and sales efforts, the bottom line, and the future of the business.

But customer data isn’t the only data that IT organizations collect. There is another type of data being collected continuously, including right now as you read this article. It’s called IT data, or rather, IT Big Data. It comprises all of the data you’re logging on your distributed servers, both within your own datacenters and within your service providers’ datacenters, as well as your mainframe server data.

This data contains information about your IT resource usage and, through the use of analytics, it can tell you how much memory your servers are using versus how much is available, how many CPU cores are configured versus how many are needed for your current workloads, etc. It can also tell you how much CPU and MSU you’re using, and who is using it.

This data is no less important than any other data you are collecting. It can tell you how much you are spending on IT resource usage right down to the department, application, and user. It can tell you how and where you can save money just by being more efficient. It can tell you how efficient your outsourcers are, and whether or not they are helping you or hindering you with cost optimization. It can help to turn your IT organization from a giant cost center to a window on the efficiency of your entire business.
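
As a simplified illustration of what such analytics can do, the sketch below rolls up CPU consumption by department and converts it to a chargeback cost. The record layout, department names, and cost rate are all hypothetical; real shops would derive these figures from SMF/RMF records and distributed monitoring logs.

```python
from collections import defaultdict

# Hypothetical, already-parsed usage records; real shops would derive
# these from SMF/RMF data or distributed monitoring logs.
usage_records = [
    {"department": "claims",  "application": "CLMS01", "cpu_seconds": 5400.0},
    {"department": "claims",  "application": "CLMS02", "cpu_seconds": 1250.0},
    {"department": "billing", "application": "BILL07", "cpu_seconds": 9800.0},
    {"department": "billing", "application": "BILL09", "cpu_seconds":  300.0},
]

COST_PER_CPU_SECOND = 0.02   # illustrative chargeback rate, not a real figure

def cost_by_department(records):
    """Aggregate CPU seconds per department and convert to a chargeback cost."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["department"]] += rec["cpu_seconds"]
    return {dept: round(secs * COST_PER_CPU_SECOND, 2)
            for dept, secs in totals.items()}

print(cost_by_department(usage_records))
# {'claims': 133.0, 'billing': 202.0}
```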

3. Smart Mobile Development

There has also been a tremendous amount of buzz over how mobile is one of the biggest disruptors in business over the last few years—actually, for a decade or more. By now you know that failure to respond to your customers’ mobile needs is a direct invitation for them to start looking elsewhere; in effect, pushing your customers to your competition—businesses that are providing the mobile connectivity that your customers want.

Most large banks now provide some level of mobile capability to their customers, and most forward-thinking businesses are moving in that direction. Companies that still do not provide mobile access to customers, especially those running mainframe systems in their datacenters, face an uphill battle: their mainframe applications were never designed for a mobile interface.

Many IT leaders in this predicament feel that the only way out is to duplicate the capabilities of their existing mainframe applications, rebuilt from the ground up as mobile applications. Beyond the enormous cost of such an endeavor, it also means that COBOL applications designed decades ago and updated regularly ever since now have to be reverse engineered, because the original program designers are long gone and anyone with deep insight into how the applications work has either moved on, retired, or is looking to retire.

The smart way out is to use a mobile solution that allows new development to leverage the legacy mainframe applications rather than duplicating or replacing them: a hybrid development approach, in effect. Such a solution removes most of the risk involved in providing mobile capabilities to a mainframe house. The mainframe applications have worked well for years and still work flawlessly; why not leverage them rather than reinvent the wheel?

Today’s programmers can adopt a solution like this readily, as no new COBOL or even DBA work is required. Mobile programmers use a modern toolset to connect to the legacy applications, working in the modern languages they are used to. They focus on the mobile interface and do not have to recreate legacy logic. Development time and cost with this approach are far less than with any ground-up rebuild, by a factor of 10 or better. This is truly a low-risk, high-reward solution.
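
Here is a minimal sketch of what that looks like from the mobile developer’s side, assuming the legacy transaction has already been exposed as a REST service by a gateway product such as z/OS Connect. The URL, payload fields, and function below are invented for illustration.

```python
import requests

# Hypothetical endpoint exposing an existing CICS/COBOL transaction as REST;
# the URL, credentials, and field names are placeholders for illustration.
BALANCE_API = "https://zosconnect.example.com/banking/accounts/{account}/balance"

def get_account_balance(account_id, token):
    """Call the legacy balance-inquiry transaction through its REST facade.
    No new COBOL is written; the mobile app only handles the interface."""
    response = requests.get(
        BALANCE_API.format(account=account_id),
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["balance"]   # response field assumed for the example
```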

4. SQL Quality Automation and Control

Tight, efficiently designed Db2 applications can be undermined by inefficient SQL statements. Hidden, hard-to-find SQL inefficiencies can hamper productivity and dramatically affect costs. If they remain hidden, the ongoing drain can drag down not only current performance but also, worse, the performance of future workloads. The problem can be far more serious in outsourced Db2 development and support environments, where control of development and test processes may not be very effective (or visible).

Exacerbating the problem, SQL continues to become more and more complex, as each new version of Db2 introduces more functionality and, therefore, complexity. And with the growth and now dominance of dynamic SQL, programmers often have to deal with mixed dynamic and static SQL environments.

What solutions are out there? There is some thought that monitoring can solve SQL quality problems, but this is true only for new problems that arise within production systems. For those SQL bottlenecks that have existed since day one, monitoring will not help.

The better solution is automated SQL quality control that can be applied in development, test, and production environments. Certainly, you want to catch SQL quality issues as early as possible, ideally on the developer’s desktop. But you also need a way to validate the SQL that’s running right now on your production systems. This type of solution also mitigates the quality risks involved in outsourcing Db2 development or maintenance.
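
As a highly simplified illustration of the idea, an automated quality gate might flag well-known SQL anti-patterns before code ever reaches production. Real tools go much further, analyzing access paths and catalog statistics; the rules below are made up to show the shape of such a check.

```python
import re

# A few illustrative rules only; real SQL quality tools analyze access paths,
# host-variable usage, and catalog statistics rather than raw statement text.
RULES = [
    (r"\bselect\s+\*", "Avoid SELECT *; list only the columns you need"),
    (r"\bwhere\b[^;]*\blike\s+'%", "Leading-wildcard LIKE defeats index access"),
    (r"\bselect\b(?![^;]*\bwhere\b)", "Missing WHERE clause may cause a full scan"),
]

def check_sql(statement):
    """Return a list of quality warnings for a single SQL statement."""
    findings = []
    text = statement.lower()
    for pattern, message in RULES:
        if re.search(pattern, text):
            findings.append(message)
    return findings

# Flags both the SELECT * and the leading-wildcard LIKE in this statement.
print(check_sql("SELECT * FROM claims WHERE cust_name LIKE '%smith'"))
```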

5. Smart Capping

Tell anyone in a large mainframe datacenter about capping and their eyes will glaze over, and they will stop listening to you completely. That’s because capping is one big no-no when it comes to running business-critical transaction processing; it’s a bad word. The last thing you want getting in the way of your critical processing is performance capping. Slow down your processing on purpose? No thanks, and there’s the door; don’t let it hit you on the way out.

The purpose of capping in the first place was to help control costs in the mainframe datacenter. Unfortunately, it’s a misunderstood concept. The right idea is to cap low-priority workloads while leaving high-priority workloads to complete without capping. However, the details are sometimes difficult to grasp, and to be fair, it’s a challenging feature to run effectively. It can also be risky to change system capacity settings manually: erroneous capping of critical workloads does happen occasionally, with disastrous results.

Fortunately, a small number of third-party vendors specialize in this specific area. They automate system capacity settings in such a way that critical workload capping is not a risk, and system resources can be shared between LPARs, giving more resources to critical workloads when needed.
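
Conceptually, what those tools automate resembles the schematic loop below, in which low-priority LPARs give up defined capacity when a critical LPAR approaches its cap. The LPAR names, numbers, and thresholds are hypothetical, and real products act on actual defined-capacity and group-capacity controls rather than in-memory structures.

```python
# Schematic of priority-aware capping: shift MSU cap headroom from
# low-priority LPARs to a critical LPAR when it nears its limit.
# All names and numbers are hypothetical.

lpars = {
    "PRODCICS":  {"priority": "critical", "cap_msu": 400, "used_msu": 385},
    "TESTBATCH": {"priority": "low",      "cap_msu": 150, "used_msu": 40},
    "DEVLPAR":   {"priority": "low",      "cap_msu": 100, "used_msu": 25},
}

HEADROOM_THRESHOLD = 0.95   # act when a critical LPAR exceeds 95% of its cap
STEP_MSU = 25               # how much capacity to shift per adjustment

def rebalance(lpars):
    """Move spare cap from low-priority LPARs to critical LPARs under pressure."""
    for name, lp in lpars.items():
        if lp["priority"] != "critical":
            continue
        if lp["used_msu"] < lp["cap_msu"] * HEADROOM_THRESHOLD:
            continue  # critical workload still has headroom; nothing to do
        for donor_name, donor in lpars.items():
            spare = donor["cap_msu"] - donor["used_msu"]
            if donor["priority"] == "low" and spare >= STEP_MSU:
                donor["cap_msu"] -= STEP_MSU     # cap the low-priority work
                lp["cap_msu"] += STEP_MSU        # never cap the critical work
                break

rebalance(lpars)
print(lpars["PRODCICS"]["cap_msu"])   # 425: the critical LPAR gained headroom
```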

6. Smart Data Replication

Most of the large banks and insurance companies in the US and around the world have multi-platform datacenters that are a mix of mainframe and mid-range servers. Different user groups use different platforms, but in some cases, they require the same data. For example, a transaction processing application on a mainframe system may require some specific customer data to complete a transaction—it gets that data from the Db2 database attached to the z13 mainframe system in the Dallas datacenter. Meanwhile, a customer service agent may require that same data to answer questions put forth by that same customer during a support call. The agent gets that data from the customer service network server running Windows NT in the Kansas City datacenter.

In some instances, that customer service data is out of date by minutes, hours, or even days because of the data replication techniques used to share data across the datacenter (and datacenters) for all users to access. The reason for replicating data is simple: IT organizations don’t want customer service agents directly accessing mainframe data because of the risk and extra load that type of access represents, which is understandable. So data replication is a requirement, and there are many ways to do it. One popular technique is ETL (Extract, Transform and Load), which essentially copies all of the data from one database to another. A serious problem with ETL is that it is time-consuming and resource-intensive, consuming large amounts of server and network resources. And the result is typically data that’s up to a day old, and in some cases older.

ETL is an important part of a datacenter’s emergency backup plan and isn’t going away anytime soon. As a daily replication tool, however, it’s found wanting. A better approach is a fast replication technique that employs speed-optimized CDC (Change Data Capture). Using one tool that does it all is preferable: one that is multi-platform, multi-OS, and multi-database capable. This technique keeps replicated data accurate at all times, since it’s virtually a real-time solution; the mainframe application and the customer service agent will be using the same data.
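
To show what CDC-based replication does differently from ETL, here is a minimal sketch using a hypothetical change-record format: instead of copying entire tables, only the changes captured at the source are applied to the target, which is what keeps the replica close to real time.

```python
# Hypothetical change records as a CDC capture process might emit them;
# real CDC tools read the database log and stream changes continuously.
change_stream = [
    {"op": "INSERT", "key": "C-1001", "row": {"name": "A. Smith", "tier": "gold"}},
    {"op": "UPDATE", "key": "C-1001", "row": {"name": "A. Smith", "tier": "platinum"}},
    {"op": "DELETE", "key": "C-0042", "row": None},
]

def apply_changes(target_table, changes):
    """Apply only the captured changes to the replica instead of reloading
    everything, which is what keeps the target close to real time."""
    for change in changes:
        if change["op"] in ("INSERT", "UPDATE"):
            target_table[change["key"]] = change["row"]
        elif change["op"] == "DELETE":
            target_table.pop(change["key"], None)
    return target_table

replica = {"C-0042": {"name": "Old Customer", "tier": "bronze"}}
print(apply_changes(replica, change_stream))
# {'C-1001': {'name': 'A. Smith', 'tier': 'platinum'}}
```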

Six ways to do the impossible

Increases in business translate into increased transaction processing and increased transaction processing costs. Combined with rising costs all around, such as personnel and power consumption, this leaves CIOs and IT managers looking for ways to control costs anywhere and everywhere.

For the most part, companies will continue to rely on the “tried-and-true” responses, some of which make sense, although limited in vision, and most of which are alarmingly risky, costly, or both. There are cases where those approaches are justified, but there are just as many where unnecessary risks are being taken and money is being left on the table. The six smart solutions described here are among the most effective techniques IT leaders can adopt to help control rising costs while improving system performance. They are mainframe IT best practices that have been used, and are being used today, by some of the biggest and most successful companies on the planet.


