In 2016, the least expensive U.S. city for data center operations was Sioux Falls, South Dakota, at $9,684,282 in annual operating costs[1]. In Canada, where data center risk is second-lowest only to the U.S., the costs of energy and other data center elements can be lower still[2]. Even so, millions of dollars a year to operate a data center is still a lot of money.

Because of this, companies have been chipping away at data center costs—primarily by seeking out easy ways to reduce the carbon and physical footprints of servers and storage—and they’ve done phenomenally well over the past ten years. There are large enterprises that have reduced their energy expenses alone by over a million dollars per year, and they have largely done it by eliminating rows of physical servers and by virtualizing operating systems and storage.

The problem is that most data center managers stopped there.

More than a decade after the data center virtualization movement began, systems software (excluding the operating systems that have already been virtualized) remains one of the most expensive assets in the data center. Simply stated, systems software other than operating systems is still largely untouched by virtualization. Instead, sites continue to pay exorbitant licensing and maintenance fees, and those fees only climb as demand for new application development and testing environments grows with the explosion of mobile and web-based applications.

“Every organization I talk with today says that they are challenged to keep up with their applications backlog,” said Dave Evans, founder and CEO of Standardware, which provides IMS database virtualization tools for the IBM z Systems mainframe. “These applications are primarily in the areas of mobile and web-facing apps—but for these applications to work, they have to connect with backend mission-critical data that is stored in mainframe data repositories.”

Mainframe computing, in particular the IBM z Systems mainframe, continues to be a critical part of corporate and government data centers. One reason is the volume of legacy applications that continue to run, and run well, in the mainframe environment. A second reason is the resiliency and sophistication of the mainframe, which at some sites has been running for thirty or forty years without a failure. A third reason has been the systematic reinvention of the mainframe: today's IBM z Systems machines can run Linux alongside their traditional operating system, z/OS.

“Companies recognize this,” said Evans. “This is exactly why they call us when they see large application backlogs, all of which require expensive mainframe systems like the IBM z Systems IMS database.”

Evans' company provides a virtualization tool that enables sites to produce multiple virtual instances of a mainframe IMS database for application development and testing, while avoiding or delaying costly hardware and software upgrades. “What sites are finding is ways to virtualize and economize some of their most expensive system assets,” said Evans. “For example, by using these products to virtualize and replace multiple instances of IMS for application development and testing, organizations can save on CAPEX (capital expenses) in the data center.”
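To make the CAPEX argument concrete, here is a purely hypothetical back-of-envelope sketch. None of these figures come from Standardware, IBM, or the sources cited in this article; real savings depend entirely on a site's capacity, licensing, storage and staffing costs.

```python
# Hypothetical illustration only: every figure below is invented for this sketch,
# not taken from Standardware, IBM, or this article's sources.
physical_copies_needed = 5                 # dev/test teams each wanting a full IMS copy
annual_cost_per_physical_copy = 200_000    # assumed capacity, storage and admin per copy
annual_cost_of_virtual_layer = 250_000     # assumed cost of a virtualization tool

physical_total = physical_copies_needed * annual_cost_per_physical_copy
virtual_total = annual_cost_of_virtual_layer   # virtual copies share one physical instance

print(f"Five full physical copies: ${physical_total:,}")                     # $1,000,000
print(f"Virtualized copies:        ${virtual_total:,}")                      # $250,000
print(f"Illustrative difference:   ${physical_total - virtual_total:,}")     # $750,000
```

The only point of the sketch is the shape of the trade-off: because virtual copies share one underlying physical instance, the cost of each additional development or test environment grows far more slowly than it does when every environment needs its own physical resources.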

Just as significantly, the accompanying workflow automation enables sites to get new applications to market sooner and to reduce their reliance on expensive, in-demand data administration personnel.

Evans explained how database virtualization tools save time.

“In a standard mainframe IMS database environment, IMS database resources must be defined and reserved for every single action that requires database access, whether the accessing application is a web-facing, mobile or internal program. Additionally, IMS resources must be defined for each individual application to ensure that the application has IMS data space reserved for it so it can access data as it is being developed, tested and trained on.

“Virtualization technology transforms this paradigm by generating software-based representations of physical IMS data resources. In turn, this enables database administrators and systems programmers to spin up new development and test environments without having to modify application program names and libraries.”
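A rough way to picture the shift Evans describes is as a mapping layer between the database names an application already uses and per-environment virtual copies of the data. The sketch below is conceptual only: it is not Standardware's product or a real IMS interface, and every name in it (the environment names, the .VCOPY suffix, the class names) is invented for illustration.

```python
# Conceptual sketch of the idea described above: not Standardware's tool and
# not the real IMS API. Applications keep using their existing database (DBD)
# names; a virtualization layer resolves those names to an isolated, software-
# based copy for each development or test environment.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VirtualEnvironment:
    """One isolated dev/test environment backed by virtual database copies."""
    name: str
    mappings: Dict[str, str] = field(default_factory=dict)  # DBD name -> virtual copy


class VirtualizationLayer:
    """Maps unchanged application database names to per-environment virtual copies."""

    def __init__(self, production_dbds: List[str]) -> None:
        self.production_dbds = list(production_dbds)
        self.environments: Dict[str, VirtualEnvironment] = {}

    def spin_up(self, env_name: str) -> VirtualEnvironment:
        # No new physical resource definitions and no changes to program names
        # or libraries: the environment gets its own virtual copy of each DBD.
        env = VirtualEnvironment(
            name=env_name,
            mappings={dbd: f"{env_name}.{dbd}.VCOPY" for dbd in self.production_dbds},
        )
        self.environments[env_name] = env
        return env

    def resolve(self, env_name: str, dbd_name: str) -> str:
        # The application asks for the same DBD name it would use in production;
        # the layer answers with that environment's isolated virtual copy.
        return self.environments[env_name].mappings[dbd_name]


layer = VirtualizationLayer(production_dbds=["CUSTOMER", "ORDERS"])
layer.spin_up("DEVTEAM1")
layer.spin_up("WEBTEST")
print(layer.resolve("DEVTEAM1", "CUSTOMER"))  # DEVTEAM1.CUSTOMER.VCOPY
print(layer.resolve("WEBTEST", "CUSTOMER"))   # WEBTEST.CUSTOMER.VCOPY
```

What matters in the sketch is that the application-facing name never changes; only the layer's answer does, which is why new environments can be spun up without touching program names or libraries.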

How important is this?

Globally, almost every major company in every industry sector runs an IBM z Systems mainframe because of the reliability and performance of the platform[3], and nearly 40 percent of organizations in a 2013 Arcati research survey reported that their web-facing and mobile applications were accessing mainframe resources[4].

“Mainframe resources are critical in the companies that I visit, and in nearly every one of these companies, there are application backlogs and software developers demanding their own development environments,” said Evans.

This high demand isn't likely to ease soon, which makes it all the more important for data center managers to extend their virtualization efforts beyond servers and storage to mission-critical software such as databases.


[1] https://searchdatacenter.techtarget.com/news/1204203/Data-center-locations-ranked-by-operating-cost
[2] https://www.cio.com/article/2391505/energy-efficiency/why-putting-your-data-center-in-canada-makes-sense.html
[3] https://mainframes.wikidot.com
[4] https://www.arcati.com/13part2.pdf

Mary E. Shacklett is President of Transworld Data, a technology analytics, market research and consulting firm. Mary is a noted technology analyst and commentator who is listed in Who’s Who Worldwide and in Who’s Who in the Computer Industry. She is a keynote speaker, and has over 1,000 articles, research studies and technology publications in print. Mary may be reached at mshacklett@twdtransworld.com.
