Mainframe AI Agents: The Interns Who Never Sleep

Nov 19, 2025

Penney Berryman, MPH (she/her), is the Content Editor for Planet Mainframe. She also writes health technology and online education marketing materials. Penney is based in Austin, Texas. Connect with her on LinkedIn.

A conversation with BMC's Anthony Doro and Leah Sokalov.

A Knowledge Problem That Can’t Wait 

Amanda Hendley, Managing Editor of Planet Mainframe, met with Distinguished Engineer Anthony Doro and Senior Product Manager Leah Sokalov, both from BMC. The conversation focused on how AI is not just becoming smarter, but also becoming a genuine working partner for mainframe teams.

Leah identified an issue that every mainframe organization faces: too much essential knowledge is stored in too few places. That has always been true, but the urgency has intensified. Fewer new engineers enter the field, experts retire, and systems grow more complex.

However, AI assistants trained on real institutional history dramatically simplify knowledge transfer and access. New hires can start faster. Experts carry less mental weight. Outages resolve more efficiently. And because AI agents learn from the same material, they become smarter and more helpful over time.

“It doesn’t matter if you’re twenty years in or twenty days in, [with AI assistants], you get the same access to the same deep knowledge.” —Leah

The future of mainframe work is faster, clearer, and far less dependent on who happens to be available on any given day.

AI That Understands the Mainframe

Anthony underscored the technical reality: general-purpose AI models can’t solve mainframe problems on their own.

“Language models alone are really not enough. The [LLMs] have to become an extension of your enterprise environment.” —Anthony

AI only becomes useful when it connects directly to workflows, documentation, systems, and real-time data. Once integrated, an LLM starts acting like someone who actually understands how your shop works.
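In practice, that integration often takes the shape of retrieval-augmented prompting: pull the most relevant pieces of institutional knowledge, pair them with live system data, and hand both to the model. Here is a minimal, hypothetical Python sketch; the playbooks, metric values, and keyword scoring are all invented for illustration, and a real deployment would use embeddings and a vector store rather than word overlap:

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    Stand-in for the embedding search a real system would use."""
    terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )[:top_k]

def build_prompt(query: str, documents: list[str], live_data: str) -> str:
    """Combine retrieved institutional knowledge with real-time data
    so the LLM answers from the shop's own context, not generic text."""
    context = "\n".join(retrieve(query, documents))
    return (
        f"Enterprise context:\n{context}\n\n"
        f"Real-time data:\n{live_data}\n\n"
        f"Question: {query}"
    )

# Illustrative documents and live reading -- not from any real system.
playbooks = [
    "CICS region restart playbook: check MAXTASK before restarting",
    "Payroll batch window runs 02:00 to 04:00; avoid IPLs in that window",
]
prompt = build_prompt(
    "How do I restart a CICS region?",
    playbooks,
    live_data="CICS region A05 reporting MAXTASK at 98%",
)
print(prompt)
```

The point of the sketch is the shape of the prompt: the model sees the shop's own playbooks and the current system state before it sees the question.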

Getting Ahead of the Tasks

The most exciting shift, according to both speakers, is the transition from reactive tools to proactive AI agents. Today’s chatbot-style experiences rely on people asking questions.

However, proactive AI agents work the opposite way. They monitor the environment independently, notice problems early, interpret what’s happening, and guide the next steps toward a solution without waiting to be prompted.

Anthony described these agents as “software entities that observe their environment, make decisions, and take actions.” It’s a quiet evolution that changes an engineer’s day from constant monitoring to actual problem-solving.

Security Still Rules Everything

Giving AI the ability to act independently raises understandable concerns. Anthony and Leah addressed the trepidation they often see in clients. They pointed out that AI agents follow the same guardrails as any sensitive enterprise service, with strict permissions, least-privilege access, and zero-trust principles.

Leah offered this analogy:

“AI agents are like interns. You don’t give them everything on day one. You train them, you watch them, and you expand access only when they’ve earned it.” —Leah

This slow, supervised approach keeps systems safe while still allowing AI to take on meaningful tasks.
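The intern analogy translates directly into a least-privilege guardrail: an agent may invoke only the actions it has been explicitly granted, and access expands one earned action at a time. A minimal, hypothetical sketch (the agent and action names are invented, not a BMC implementation):

```python
class AgentPermissions:
    """Least-privilege guardrail: an agent may perform only the
    actions it has been explicitly granted; everything else fails."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.granted: set[str] = set()

    def grant(self, action: str) -> None:
        # Access expands deliberately, one earned action at a time.
        self.granted.add(action)

    def require(self, action: str) -> None:
        if action not in self.granted:
            raise PermissionError(f"{self.agent_id} may not {action}")

intern = AgentPermissions("capacity-watch-agent")
intern.grant("read_metrics")       # day one: read-only access

intern.require("read_metrics")     # allowed
try:
    intern.require("restart_job")  # not yet earned
except PermissionError as err:
    print(err)
```

Defaulting to deny and granting actions individually mirrors the zero-trust, role-based patterns the speakers say apply to any sensitive enterprise service.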

Equalizing Access to Institutional Knowledge 

Looking ahead, Leah and Anthony envision a future where all the scattered mainframe knowledge—old tickets, outage notes, fixes, and formulas—becomes organized and equally accessible. That’s when the entire team benefits.

This is more than modernization. It’s a shift toward real resilience.

A Future That’s Already Taking Shape

Anthony and Leah ended the conversation on the same note: optimism. The tools exist. The infrastructure exists. And AI is ready to take on work that once required decades of deep, hard-earned experience.

Anthony put it best:

“It’s going to open up endless opportunities for all of us to do some truly creative things in this space.” —Anthony

AI isn’t here to replace mainframe professionals. It’s here to clear cognitive clutter, allowing people to focus on the meaningful work that keeps critical systems running.

For the first time in a long time, the future of mainframe operations looks … easier.

Transcription - Anthony Doro and Leah Sokalov Interview

Amanda Hendley, Planet Mainframe:
I’m here today with Anthony Doro, a distinguished engineer with over three decades of experience and a proven track record of shaping the future of enterprise technology. I’m also joined by Leah Sokalov, a senior manager in product management at BMC, leading the generative AI initiative across the AMI portfolio.

Amanda Hendley, Planet Mainframe:
Leah, building on your talk: for large mainframe organizations, the challenge of scattered institutional knowledge isn’t new. But how has the widening mainframe skills gap made addressing this knowledge crisis more urgent now than ever before?

Leah Sokalov:
Yeah, so the problem of scattered knowledge across the mainframe is not new. But lately, it’s become more urgent because of the widening skills gap and the shortage of new people coming into mainframes. The complexity has also grown tremendously in recent years. So even though the problem isn’t new, it’s more urgent than ever.

We’re addressing that by having AI assistants—these knowledge experts—come into play and help leaders and organizations work through this challenge.

Amanda Hendley, Planet Mainframe:
Anthony, we often hear about generative AI’s potential but also about its limitations for mainframe teams. What are the non-negotiable requirements to make GenAI truly useful for real operations? If we want to move beyond generic answers and deliver expert-level intelligence, how do specific language models play a crucial role?

Anthony Doro:
Yeah, that’s a really good question. At this point, I think we all realize language models alone are not enough for enterprise solutions. We have to connect the language models to your workflows and your processes. The LLM basically has to become an extension of your enterprise environment.

In addition, harnessing all your enterprise knowledge and augmenting it with the LLM is critical. Everything you document—playbooks, processes, workflows—must be ingested and processed and live side by side with the LLM.

We also have to connect the LLM to your systems. It needs real-time data to bring real substance to enterprise generative AI. That’s what takes us beyond generic answers. It becomes infused into your enterprise processes.

Amanda Hendley, Planet Mainframe:
So shifting a bit from generative AI and how AI is helping us—let’s talk about how AI can start doing things for us. Can you talk about AI agents and agentic workflows and what that means for mainframe operations?

Anthony Doro:
Sure. Absolutely. If we look at where we are today with a lot of solutions, we’re still in reactive-based models. You have a chat interface, the user sits there, types questions or prompts, and gets a response. They’re reacting to some situation—maybe responding to an email, or dealing with an alert in the AIOps space. You process the information yourself, then ask the AI for help or guidance. That’s reactive.

With AI agents, we’re shifting to a proactive model. These agents work 24×7 on defined use cases. They stay on top of situations, detect problems early, give you awareness, and notify you about the situation or next steps you should take.

If we take a step back, agents are software entities that can observe their environment, make decisions, and take actions. That’s the shift—to a proactive model with agents.

Leah Sokalov:
Yeah, and that’s the true promise of agents and AI—helping accelerate productivity and offload the cognitive load we put on our mainframe practitioners and experts.

Anthony Doro:
Right. These agents connect to the enterprise knowledge we’re collecting. They connect to other systems. They connect to real-time data. They harness all of that information to do the work they need to do.

There are two types of workflows we hear about: prescriptive workflows—where AI engineers map out exactly what agents must do—and true agentic workflows. In agentic workflows, the agents work together, collaborate, and decide the best next steps as they move forward.

Amanda Hendley, Planet Mainframe:
For agents to operate in a high-stakes mainframe environment, we’re putting a lot of trust in their ability to make decisions. That’s going to raise security concerns. Can you talk about the considerations for security?

Anthony Doro:
Yeah. There’s a lot of hype around agents today, and that’s good—we’re promoting them in the direction we want. But at the end of the day, these agents are services with artificial intelligence built into them. As an architect, I look at them like any other service we create.

It comes down to a zero-trust architecture, least-privilege access, role-based access—everything you would apply when mapping out any system. We follow those same patterns with AI. And then we add extra AI-related guardrails around the entire agent to make sure it’s safe and secure.

Leah Sokalov:
Yeah. I like to think of AI agents as interns. When you onboard a new person, you don’t give them everything on day one. You slowly gain trust. You train them. You make sure what they’re doing complies with your guardrails and your security measures. It’s the same here.

Anthony Doro:
Exactly. We look at it as micro-access. You give agents access only to what they need to do—nothing more, nothing less. And you can isolate and guardrail the entire experience.

Amanda Hendley, Planet Mainframe:
Leah, looking forward, how do you see conversational AI knowledge experts and agents not just modernizing the mainframe, but reshaping the future of mainframe teams—maybe even improving resilience and turning legacy knowledge into a strategic advantage?

Leah Sokalov:
We talked about the knowledge gap and skill shortage. And we already have AI assistants that are transforming the way we interact with systems—z/OS, CICS, IMS, whatever you have. These interactions are becoming easier.

But looking forward, imagine having all of your outages, tickets, incidents, and team knowledge codified and accessible. It doesn’t matter if someone has twenty years of experience or twenty days. Everyone gets access to the same vast knowledge.

That’s true resilience. Knowledge—like security—is one of the main assets we have for mainframe resilience moving forward. And it’s exciting.

Anthony Doro:
Yeah. It’s moving all of us beyond the reactive chat-based experiences we have today into truly agentic models. That’s going to open endless opportunities to do really creative things in this space.

Leah Sokalov:
The technology is there. The tools are there. The platforms are there. We just need to keep up and see how we blend everything for the benefit of our organizations to boost productivity and serve as actual assistants to what we do in the mainframe domain. A lot of innovation is coming. It’s exciting.

Amanda Hendley, Planet Mainframe:
Well, thank you both for joining me today.

Anthony Doro:
Thank you for having us.

Leah Sokalov:
Thank you.
