IT and business management are increasingly concerned with the rising costs of their highly complex and ever-growing mainframe and distributed systems data centers. They are equally concerned about controlling outages and about their limited visibility into cost drivers, including in their outsourced environments.
Wide View Needed
CIOs, IT managers and line-of-business managers don’t have unhindered visibility into their IT data, nor do they have the transparency needed to quickly assess the current situation or to decide on the best strategy going forward. And that’s a big problem.
For years, IT managers have relied on monitoring tools developed by IBM and mainframe ISVs, which provided an excellent view of the state of their mainframe systems. These tools were supported by large teams of mainframe support technicians who used them every day.
Monitoring tools have changed tremendously over the years; some are driven by analytics engines, while many are simply updated green-screen reporting tools. For the most part, though, tools are still installed on the monitored systems, configured and maintained by technical staff, and can be a challenge to learn, interpret and manage.
Complicating matters, the landscape is very different today: the mainframe is typically siloed in an official or unofficial bimodal IT environment, where resources beyond basic operation and maintenance are severely limited. The prevailing thinking is that mainframe costs, including support personnel, must be contained.
So, while today’s monitoring tools can provide reams of data, the expertise may not be readily available to run the tools, interpret their results and relay the findings to business managers. Worse, the data IT gets from monitoring tools is sometimes raw and may do little to address their biggest concerns: IT costs and transparency into their own infrastructure.
Moving Forward
Gartner says that “IT Operations Analytics (ITOA) technologies are primarily used to discover complex patterns in high volumes of often ‘noisy’ IT system availability and performance data, providing a real inference capability not generally found in traditional tools.” The operational data explosion (think IT Big Data) has sparked a sudden and significant increase in demand for ITOA systems. Gartner predicts that by 2018, 25% of the Global 2000 will embrace ITOA, up from about 2% just two years ago.
IT must make its big data readily available to take full advantage of today’s ITOA tools. That isn’t a problem: mainframe systems offer over 100,000 points of measurement, networked servers offer more than 2,000 each, and IT organizations are already collecting this information. ITOA tools access this information, in some cases add to it, and run advanced algorithms against it, presenting the results through built-in business intelligence reporting capabilities.
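To make that concrete, here’s a minimal sketch of the kind of pattern detection an analytics engine might run against collected metrics. It isn’t any particular vendor’s algorithm; the CPU figures and the z-score threshold are made-up assumptions for illustration.

```python
# Illustrative sketch only: a simplified version of the pattern detection
# an ITOA engine might run against collected operational metrics.
# The metric values and threshold below are hypothetical assumptions.
from statistics import mean, stdev

def find_anomalies(samples, z_threshold=3.0):
    """Flag samples that deviate from the series mean by more than
    z_threshold standard deviations (a basic z-score test)."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - mu) / sigma > z_threshold]

# Hourly CPU-busy percentages from a monitored system (made-up data).
cpu_busy = [42, 45, 44, 47, 43, 46, 44, 95, 45, 43, 44, 46]

for hour, value in find_anomalies(cpu_busy):
    print(f"hour {hour}: CPU busy {value}% deviates sharply from the norm")
```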
The best ITOA tools deliver information via the cloud, meaning they stay off the monitored systems and have no detrimental impact on their performance. Not only do they deliver better information in more easily viewed and understood ways, but they often provide explanations for less-experienced personnel and suggest possible solutions. These tools show how best to save on costs, prevent outages and, ultimately, achieve optimum IT operational efficiency.
New Solutions
After listening to customers and recognizing the need, IBM and a handful of mainframe and distributed systems ISVs are answering the call with new solutions: analytics for IT managers and business managers.
These new ITOA tools process IT data using smart algorithms—sometimes combining technical and business data—to provide new transparency and control for IT and business managers over their own IT infrastructures. Costs are contained by increasing productivity through data-driven insight, process automation and the seamless expansion of resources when and where needed.
Mainframe-specific ITOA tools help managers of transaction processing systems to understand the operational efficiency of their processing environments. Operational data is securely transferred to cloud-based repositories—so there is no impact on measured systems—where analytics engines can resolve patterns to determine immediate and potential problem areas, and identify improvement possibilities.
These insights are generated automatically and presented via a modern web or mobile GUI to personnel for immediate cost-saving and/or performance-improvement action. Examples include isolating areas of high MSU (millions of service units) and CPU usage as candidates for implementing CICS thread-safety, workload transfers to zIIP processors, improved buffer management or even in-memory technology.
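As a rough illustration of that kind of isolation, the sketch below scans summarized usage records and flags workloads whose MSU and CPU consumption exceed a baseline, marking them as candidates for the optimizations above. The record fields and thresholds are hypothetical; real tools work from actual SMF data and historically derived baselines.

```python
# Hypothetical summarized usage records (not a real SMF layout): each entry
# gives a workload name, the interval's MSU consumption, and CPU-busy %.
usage_records = [
    {"workload": "CICSPROD", "msu": 310, "cpu_pct": 88},
    {"workload": "BATCHNGT", "msu": 120, "cpu_pct": 35},
    {"workload": "DB2PROD",  "msu": 275, "cpu_pct": 81},
    {"workload": "TESTLPAR", "msu": 40,  "cpu_pct": 12},
]

# Illustrative thresholds; real tools derive these from historical baselines.
MSU_THRESHOLD = 250
CPU_THRESHOLD = 80

candidates = [r for r in usage_records
              if r["msu"] >= MSU_THRESHOLD and r["cpu_pct"] >= CPU_THRESHOLD]

for r in candidates:
    print(f"{r['workload']}: {r['msu']} MSUs at {r['cpu_pct']}% CPU -- "
          "review for thread-safety, zIIP offload or buffer tuning")
```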
Multi-platform ITOA tools provide improved transparency, which helps operational staff and DevOps teams better understand cost drivers and identify opportunities for mainframe optimization and server rightsizing. Increased transparency makes cost savings a much easier job in both corporate and outsourced data center environments. Think of the savings to be made on over-provisioned servers in a 20,000-server data center, or by moving low-priority mainframe workloads away from high-priority processing periods.
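To give a feel for the rightsizing arithmetic, here’s a minimal sketch that flags servers whose peak utilization never approaches their provisioned capacity. The utilization figures, the 20% threshold and the per-server cost are all illustrative assumptions, not industry numbers; in a 20,000-server data center, even a small percentage of such machines adds up quickly.

```python
# Hypothetical 30-day peak CPU utilization per server (percent).
peak_utilization = {"srv001": 14, "srv002": 72, "srv003": 9, "srv004": 18}

OVERPROVISIONED_PEAK = 20      # flag servers that never exceed 20% CPU
ANNUAL_COST_PER_SERVER = 3000  # assumed yearly cost of one surplus server

flagged = [server for server, peak in peak_utilization.items()
           if peak < OVERPROVISIONED_PEAK]

print(f"rightsizing candidates: {flagged}")
print(f"potential annual savings: ${len(flagged) * ANNUAL_COST_PER_SERVER:,}")
```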
Modern ITOA can also make a difference in corporate departmental charge-back accounting, obliging departments within a company to be more accountable for their IT consumption. This can change the perception of IT from a budgetary black hole into a window on business efficiency.
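A chargeback scheme of the kind described can be as simple as apportioning a period’s infrastructure cost by measured consumption, as in the sketch below; the department names, usage figures and monthly cost are invented for illustration.

```python
# Made-up monthly CPU-seconds consumed per department.
consumption = {"Claims": 420_000, "Underwriting": 180_000, "Marketing": 60_000}
MONTHLY_COST = 250_000  # assumed total infrastructure cost for the period

# Charge each department in proportion to its measured consumption.
total = sum(consumption.values())
for dept, used in consumption.items():
    share = used / total * MONTHLY_COST
    print(f"{dept}: {used / total:.1%} of usage -> ${share:,.0f} charged back")
```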
Stay Competitive
Today’s new ITOA tools are ushering in a new era in both IT and business management. Data center costs will be contained by increasing IT productivity through data-driven insight and process automation, minimizing risk while maximizing the use of existing data center infrastructure.
IT organizations will be able to forecast system issues and address them before they cause problems. The new tools will deliver information that’s useful to all stakeholders, enabling business and IT department managers to make informed decisions quickly. Most importantly, modern ITOA will deliver competitive advantage to the business through IT operational excellence.
Originally Published in IBM Systems Magazine.
Regular Planet Mainframe Blog Contributor
Allan Zander is the CEO of DataKinetics, the global leader in Data Performance and Optimization. As a “Friend of the Mainframe,” Allan draws on experience addressing both the technical and business needs of Global Fortune 500 customers; that experience has given him great insight into the industry’s opportunities and challenges and made him a sought-after writer and speaker on the topic of databases and mainframes.