A regular Planet Mainframe Blog contributor and a popular speaker, blogger, and writer, Trevor is CEO of iTech-Ed Ltd. He has an extensive 40-year background in mainframes and IT, and has been recognized as an IBM Champion from 2009–2024 for his leadership and contributions to the Information Management community.

When Beliefs Shape Machines

Imagine two people talking in a bar—one believes in God, and the other doesn’t. One swears by Apple, the other by Android. One supports Trump, the other can’t stand him. It doesn’t matter which side you’re on; the point is that people disagree.

Those disagreements matter more than you might think, especially if those people train AI systems. Their beliefs, biases, and values can shape the way AI “thinks” and responds. And those people could just as easily be mainframe users.

The Human Fingerprint Behind AI

Google’s own AI defines ethics as “a system of moral principles and values that guide human conduct.” But AI doesn’t generate its ethics—it inherits them. Whoever trains the model leaves an imprint on what it considers right or wrong.

What if someone with the mindset of Adolf Hitler—or even the Founding Fathers—trained your AI? The ethics of those eras included slavery, sexism, and rigid social hierarchies.

Now imagine those values, intentionally or not, built into the software running inside your enterprise systems.

When Questionable Code Runs on Your Mainframe

Today, nearly every tool ships with “AI capabilities.” But what happens when one of those AIs, trained with unseen bias, gets installed on your mainframe?

Suppose it’s a security tool. It flags suspicious logins, changes access levels, or deletes files. You assume it shares your ethics about what’s right or wrong—but what if it doesn’t?

Picture this: a system programmer logs in from another country at 2 a.m., making rapid configuration changes. Your AI spots the activity and suspends the session—good. But what if its ethical logic says certain regions or actions are “safe” when they’re not? That kind of bias could expose your mainframe to real damage.
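To make that concrete, here is a purely hypothetical sketch in Python—not the logic of any real product. The region allowlist, weights, and threshold are all invented for illustration; the point is that one hard-coded assumption about which regions are “safe” quietly becomes the tool’s ethics.

```python
# Hypothetical sketch: a login-anomaly scorer whose "ethics" live in its
# built-in assumptions. All names, regions, and thresholds are invented.

from dataclasses import dataclass

# Baked-in assumption from whoever built or trained the tool:
# these regions are treated as inherently "safe".
TRUSTED_REGIONS = {"US", "UK", "DE"}

@dataclass
class LoginEvent:
    user: str
    region: str          # country code of the source IP
    hour: int            # local hour of the login, 0-23
    config_changes: int  # configuration changes in the first few minutes

def risk_score(event: LoginEvent) -> float:
    """Return a 0.0-1.0 risk score for a login event."""
    score = 0.0
    if event.hour < 6:                     # off-hours activity
        score += 0.3
    if event.config_changes > 10:          # rapid configuration changes
        score += 0.3
    if event.region not in TRUSTED_REGIONS:
        score += 0.4                       # "untrusted" region raises risk
    return min(score, 1.0)

def should_suspend(event: LoginEvent) -> bool:
    return risk_score(event) >= 0.8

# A 2 a.m. sysprog making rapid changes from an untrusted region is suspended...
print(should_suspend(LoginEvent("sysprog1", "BR", hour=2, config_changes=25)))  # True

# ...but identical behavior from a "trusted" region slips under the threshold,
# purely because of the hard-coded allowlist.
print(should_suspend(LoginEvent("sysprog2", "US", hour=2, config_changes=25)))  # False
```

The bias here is trivially visible because it is three lines of code. In a trained model, the same assumption is buried in the data it learned from, which is exactly why it goes unquestioned.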

The Hidden Values in Open-Source AI

Many enterprises assume open-source AI is neutral or community-governed. But how do you verify that its training data aligns with your organization’s ethics? What hidden assumptions shape its responses and decisions?

Google’s AI lists the usual aspirational goals: avoid bias, ensure transparency, protect privacy, maintain accountability. Those sound noble—but who enforces them? And how do you prove your AI actually does those things during development and deployment?

The Real-World Risks of Machine Morality

Pop culture loves warning us about this—think The Terminator. But the real risk is subtler: an AI that decides certain “friends” can access your mainframe while blocking others. Or one that allows a ransomware attack because it “trusts” the wrong source.

Once ethics are embedded in software, consequences extend far beyond theory.

Every organization using AI must ask: who’s responsible for ensuring those AIs share our ethical standards? Who checks that our mainframe security tools reflect human judgment—not hidden bias?

Casual debates over a beer might fade away. But when AI values collide with organizational ethics, the impact could be permanent.

