When talking about Mainframe Modernization, we usually discuss DevOps, Zowe, APIs, Java, Python, etc. But how often does anyone discuss how your security posture needs to be modernized?  Do terms like ‘least privilege’ and ‘zero trust’ mean anything to your mainframe team?  

Many of these systems were built when the options were just batch, IMS, and CICS; there was no internet and no external threats. Now, traffic comes in from anywhere and everywhere, whether via APIs, distributed Db2 connections, or other means. You need to be prepared to monitor that traffic to prevent Distributed Denial of Service (DDoS) attacks and ensure nobody is getting your most sensitive data.

Since October is Cybersecurity Awareness Month, I wanted to put a plug out there for Db2 people. Everyone knows that the best cybersecurity plan involves multiple levels of detection and prevention.

Data Access Monitor for Your Mainframe

On the mainframe, many people assume that RACF/ACF2/Top Secret is the answer to everything security-related on the platform, but remember that a Data Access Monitor (DAM) can provide a great deal of additional protection. It is one more step toward taking the mainframe from the most securable platform to the most secure platform, since the data on the platform is the gold that hackers are looking for.

I’ll share a couple of examples of the benefits I’ve realized by using a DAM with several employers. Some are rather obvious, but others were bonuses that IBM didn’t really talk about as benefits. 

While I know there are several solutions available in the industry, my employer used the IBM Guardium product at the time. (I assume that other products probably provide very similar functionality.) We did use it for IMS and flat files, too, but I am just going to focus on Db2 for this discussion.

You Need a Data Governance Strategy

First, everyone must understand that you don’t just install a DAM and magically have all the answers. To prepare for the implementation, you need a solid overall Data Governance Strategy. This involves creating an inventory, establishing ownership, and classifying your data.

The process ensures you know which data is most critical to monitor and protect. It is not reasonable to monitor everything in a large environment; you really need to focus on the most important data. Government regulations often help with that. The government is always so helpful!
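As a minimal sketch of what that inventory-and-classification step can produce, here is one possible shape for an inventory entry. The classification labels, field names, and the idea of filtering on "sensitive" are all illustrative assumptions, not a prescribed governance model.

```python
# Sketch: a minimal data-governance inventory entry -- each table gets
# an accountable owner and a classification that drives what the DAM
# actually monitors. Labels and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    table: str
    owner: str            # the accountable Data Owner
    classification: str   # e.g. "public", "internal", "sensitive"

def monitored(inventory):
    """Focus monitoring on the most critical data, not everything."""
    return [e.table for e in inventory if e.classification == "sensitive"]

inventory = [
    InventoryEntry("HR.SSN", "HR Data Owner", "sensitive"),
    InventoryEntry("APP.ORDERS", "Sales Data Owner", "internal"),
]
print(monitored(inventory))  # ['HR.SSN']
```

The point of the filter is the same point as the paragraph above: the classification exercise is what lets you narrow monitoring to the data that matters.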

Data Owner Audits and Approvals

In our environment, we had Data Owners who were getting hit with audit requirements that they had no way of satisfying. A common one was to “identify all accesses to your sensitive data by individuals outside of an application.” 

Sure, we could identify who had access—that was another report. But we needed to know who was actually using that access, what they were accessing, and whether anyone like a Db2 SYSADM (the team I was on) was accessing the data. 

A DAM makes those types of reports easy, so each application area does not have to try to create its own. And because application IDs now come in from so many different locations, we could also verify that an ID wasn’t coming from an unexpected IP address.

We were also able to monitor any GRANTs and REVOKEs, so as access changed, we could verify that each change had the proper approval from the Data Owner. Since there was no history in the Db2 AUTH tables at that point, we could also generate a history for auditors. I mention ‘at that point’ because we recently got some great news that I’ve been hoping to see for many years:

Temporal support for security-related catalog tables
Db2 V13 Function level 505 provides the ability to access information about the authorizations and privileges that were in place for Db2 objects and users at a point in time in the past. This information can be a useful source of evidence for security auditing purposes. Temporal support is implemented through the use of history tables that are associated with their corresponding catalog tables.

APAR PH59531 delivered the functional code for temporal support for security-related catalog tables.
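To illustrate the kind of point-in-time question this makes answerable, here is a sketch of a helper that builds a temporal query using Db2’s standard FOR SYSTEM_TIME AS OF syntax. The column list and timestamp are illustrative; check the catalog table’s documented columns before relying on them.

```python
# Sketch: build a point-in-time query against a security-related
# catalog table. FOR SYSTEM_TIME AS OF is Db2's standard temporal
# syntax; the specific columns shown here are illustrative.

def point_in_time_auth_query(catalog_table: str, as_of: str) -> str:
    """Return SQL showing authorizations as they existed at a past time."""
    return (
        f"SELECT GRANTEE, GRANTOR, TTNAME "
        f"FROM {catalog_table} "
        f"FOR SYSTEM_TIME AS OF TIMESTAMP '{as_of}'"
    )

# e.g., "who held table privileges on January 15th?"
sql = point_in_time_auth_query("SYSIBM.SYSTABAUTH", "2024-01-15 10:00:00")
print(sql)
```

For an auditor, a query like this replaces the home-grown history we used to have to generate ourselves.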

Manage the Subsystem

Now, for the fun things that benefit my team: As a Db2 SYSADM, it was our job to manage the subsystem, and Guardium helped with that, too. We had our own audit requirements to satisfy, so it was critical to be able to monitor GRANTs to ensure nobody was given elevated privileges within Db2. 

We also watched for any privileges granted to PUBLIC; that was a big red flag. One of my favorites, though, was the ability to selectively monitor negative SQL codes, such as -55x authorization errors. The Data Owners were getting reports on who was accessing their data, but I wanted to know which IDs were trying to access sensitive data and failing, so we turned this feature on.

Immediately, we had multiple situations where a reported ID appeared to be just poking around, trying to find what it could and could not access. Not good. I reported it to our Security team, and they “consulted” with the manager for those people and explained the concept of data security.

Having this data is an excellent way to identify and address suspicious behavior, especially when correlated with other data.  And don’t forget about incident response; it does little good to report something that nobody reacts to.
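A minimal sketch of that “poking around” detection might look like the following. The event shape, the specific SQLCODEs (-551/-552 are Db2’s authorization-failure codes), and the threshold are my own illustrative choices, not Guardium’s actual export format or rules.

```python
# Sketch: flag IDs that repeatedly hit authorization failures
# (Db2 SQLCODEs -551/-552) -- the probing pattern described above.
# Event shape and threshold are illustrative assumptions.
from collections import Counter

AUTH_FAILURE_CODES = {-551, -552}

def flag_probing(events, min_failures=3):
    """Return IDs whose authorization-failure count meets the threshold."""
    failures = Counter(
        e["auth_id"] for e in events if e["sqlcode"] in AUTH_FAILURE_CODES
    )
    return {auth_id for auth_id, n in failures.items() if n >= min_failures}

events = [
    {"auth_id": "USR1", "sqlcode": -551, "object": "PAYROLL.SALARY"},
    {"auth_id": "USR1", "sqlcode": -551, "object": "HR.SSN"},
    {"auth_id": "USR1", "sqlcode": -552, "object": "HR.BONUS"},
    {"auth_id": "USR2", "sqlcode": 0,    "object": "APP.ORDERS"},
]
print(flag_probing(events))  # only USR1 crosses the threshold
```

One legitimate failure is noise; the same ID failing against several different sensitive objects is the signal worth escalating.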

Mainframe Availability

On the mainframe, we also have a passion for availability. In an ideal world, every application would have robust error processing, but we know that’s not reality. We had situations where errors were happening, but the applications didn’t even know.

We did also detect those using our SQL monitoring tooling, but when you have 40+ production subsystems, watching them all was not a feasible option. We needed an alert to notify us quickly so we could prevent, or at least quickly resolve, outages. Here are some examples of how we used this feature:

805—You would think applications would know when a package wasn’t found. However, when we turned that on, we found jobs that had been running for years, getting this error every time they ran, but the application area did not know. So, the first obvious question is, ‘Is that job really needed?’ Then, we could help resolve the issue. Typically, we found that the package was bound to a different collection than the plan had access to.

904—Again, in an ideal world, the application area would alert us when it sees these errors. We didn’t know if their error processing was capturing the long messages critical for diagnosing most 904 errors. By the time a ticket was opened and worked its way through all the levels of support before it reached us, it could have been hours.

I wanted to know before the application even knew. With Guardium, we could capture all the information and generate an alert, so our second-level team would know almost immediately and, in some cases, resolve the issue before the application area even knew what was happening.

922—Was a new program implemented, and was the plan authorization not granted to the right ID?
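The triage logic behind those three examples can be sketched as a simple routing table. The code descriptions paraphrase the scenarios above; the function shape and field names are illustrative assumptions, not how Guardium alerting is actually configured.

```python
# Sketch: route selected negative SQLCODEs to alerts so the DBA team
# hears about failures before a ticket climbs the support levels.
# The rule table and record shape are illustrative assumptions.

ALERT_RULES = {
    -805: "package not found -- check which collection the plan can use",
    -904: "resource unavailable -- capture the full long message now",
    -922: "plan authorization failure -- was the right ID granted?",
}

def triage(sqlcode: int, subsystem: str, auth_id: str):
    """Return an alert record for watched SQLCODEs, else None."""
    reason = ALERT_RULES.get(sqlcode)
    if reason is None:
        return None
    return {"subsystem": subsystem, "auth_id": auth_id,
            "sqlcode": sqlcode, "reason": reason}

print(triage(-805, "DBP1", "BATCHJOB"))
print(triage(0, "DBP1", "BATCHJOB"))  # successful SQL: no alert
```

The value is in the selectivity: out of 40+ subsystems, only the handful of codes you actually care about generate noise.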

The moral of the story is if you are paying for a vendor tool, you might as well maximize the value of that tool, even if that wasn’t the intended purpose.

Most companies have a SIEM (Security Information and Event Management) system that collects data from multiple platforms and can correlate that data and detect anomalies. That was one of the biggest values: we could easily send our data to our existing SIEM infrastructure and use the processes already in place. All we had to do was work with that team to determine our reporting and alerting requirements.
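A sketch of what “sending our data to the SIEM” can mean in practice: package each monitored access event as a structured record the SIEM pipeline can ingest. The field names and JSON format here are illustrative; a real integration follows whatever schema the SIEM team defines.

```python
# Sketch: package a Db2 access event as JSON for an existing SIEM
# pipeline, making the mainframe just another endpoint. Field names
# and format are illustrative assumptions.
import json
from datetime import datetime, timezone

def to_siem_event(subsystem, auth_id, sqlcode, obj, client_ip):
    """Serialize one monitored access event for SIEM ingestion."""
    return json.dumps({
        "source": "db2-dam",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subsystem": subsystem,
        "auth_id": auth_id,
        "sqlcode": sqlcode,
        "object": obj,
        "client_ip": client_ip,
    })

print(to_siem_event("DBP1", "USR1", -551, "HR.SSN", "10.0.0.42"))
```

Once the events land in the SIEM, correlation with other platforms and the existing response processes come for free.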

This is one more example of how the mainframe does not need to be different; it can just be another endpoint. We could send the monitoring data I described above, and the SIEM would allow our Cybersecurity Center to correlate our observations with other platforms, identify IDs doing things they probably should not be doing, and, most importantly, take action!

So, how important is your Db2 data?
