It’s a Gamble: Limits to Generative AI and the Mainframe

By using AI with the mainframe, organizations can potentially uncover insights, optimize operations, and stay ahead of competitors. A study by Accenture reported that AI could boost profitability by an average of 38 percent.

Approximately 25% of the 2024 Arcati Mainframe Survey participants reported developing or deploying an AI/machine learning model on their mainframes. Additionally, 39% indicated that AI/machine learning is currently a subject of discussion within their organizations.

However, companies must weigh the possible rewards against real risks. Before embracing generative AI, consider six limitations that, in casino parlance, ‘favor the house.’

Limitation 1: You can choose AI depth or specificity–not both

Companies can choose between open-source/public AI models and closed-source/private ones. Most open-source platforms offer models pre-trained on vast sets of publicly available data – more than any human can comprehend. Public AI models are readily available at a lower cost to incentivize high adoption.

In contrast, a handful of AI labs developed many of the leading generative AI models, and most release them as closed-source products to retain a competitive advantage. Private AI allows for highly focused model training and enhanced data security.

Table 1: Comparison of open-source AI and closed-source AI

| Feature | Open source/Public AI | Closed source/Private AI |
|---|---|---|
| Availability | Higher | Lower |
| Investment cost | Lower | Higher |
| Performance | Lower | Higher |
| Intuitive interfaces | Lower | Higher |
| Data security | Lower | Higher |
| Option to deploy locally | Yes | No |
| Adjustable AI model | No | Yes |
| Pre-trained | Yes | No |
| Insight into data, architecture, and code | Yes | No |
| Scalable for experiments in the cloud | Yes | Possible, but cost-prohibitive |

According to Harvard Business Review, there’s also a third option. Companies can adopt a hybrid open-source AI model in which their data is kept private, but the AI model’s code, training algorithms, and architecture are publicly available. This is a trend to watch.
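
To make Table 1’s “option to deploy locally” row concrete, here is a minimal sketch of running an open-source checkpoint entirely on-premises, assuming the Hugging Face transformers library and the publicly available gpt2 model (both illustrative choices, not mainframe-specific tooling). Because inference never calls an external API, prompts and outputs stay in-house:

```python
# A minimal sketch of the "deploy locally" option from Table 1, assuming the
# Hugging Face transformers library and the publicly available gpt2 checkpoint.
# Weights download once; inference then runs entirely on local hardware, so
# prompts and outputs never leave the premises.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize last night's batch-job failures:"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(result[0]["generated_text"])
```

The trade-off mirrors the table: you keep data security and insight into the model, but you give up the polish and performance of a closed product.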

Limitation 2: AI data governance and privacy rules are opaque at best

Many legal and ethical questions remain about AI ownership, privacy, and liability. Several countries are working on AI privacy laws, but they vary in scope and enforceability. 

Before buying an open-source model or creating a private AI, consider: 

  • What are the data security protocols? Do they include access limitations for legitimate purposes only?
  • Should a company disclose its data sources, permissions, steps to secure and de-identify the data (a de-identification sketch appears below), and compliance with regulatory requirements?
  • When and how should a company disclose its use of generative AI? An IBM blog explicitly states, “No AI bots were used to write this content.”
  • Should a company have a position on generative AI or its use in its products? For example, Broadcom will not leverage publicly hosted models to generate code embedded in its products. Apple, Samsung, and the BBC have banned employee use of the public ChatGPT service, citing privacy and compliance issues.

Also note that AI-generated content, including AI-written code, cannot be copyrighted in the US.
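
One practical answer to the de-identification question above is to scrub prompts before they ever reach a public model. Below is a hedged sketch in Python; the regex patterns are illustrative assumptions, not a compliance-grade scrubber:

```python
# A hedged sketch of one de-identification step from the governance checklist
# above: redact obvious PII before a prompt ever reaches a public AI service.
# The regexes below are illustrative, not a compliance-grade scrubber.
import re

REDACTIONS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",   # email addresses
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",       # US Social Security numbers
    r"\b(?:\d[ -]?){13,16}\b": "[CARD]",     # card-number-like digit runs
}

def deidentify(prompt: str) -> str:
    """Replace recognizable PII patterns with placeholder tokens."""
    for pattern, token in REDACTIONS.items():
        prompt = re.sub(pattern, token, prompt)
    return prompt

raw = "Customer jane.doe@example.com (SSN 123-45-6789) disputes a charge."
print(deidentify(raw))
# Customer [EMAIL] (SSN [SSN]) disputes a charge.
```

A production pipeline would pair pattern matching with a dedicated PII-detection service and an audit log, to answer the disclosure questions above with evidence.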

Limitation 3: AI training data can be incomplete and biased

The adage ‘garbage in, garbage out’ holds for generative AI, too, and multiple examples with varying ramifications bear it out. Perhaps we shouldn’t be surprised. When computer scientists at Google built a neural network of 16,000 computer processors with one billion connections and let it browse YouTube, what was the first thing it learned? How to recognize cats.

Text generators can also produce biased content; the bigger the language model, the greater the bias. This bias stems from the volume and patterns in the training data.

A team examined how GPT-3 generates text about religions. They found that GPT-3 mentioned violence once each for Jews, Buddhists, and Sikhs, twice for Christians, but nine out of 10 times for Muslims. However, injecting positive text about Muslims into an LLM reduced the number of violence mentions about Muslims by nearly 40 percentage points.
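
For readers curious how such numbers are gathered, here is a toy sketch of the counting methodology, assuming the Hugging Face transformers library and the gpt2 model. The original study used GPT-3 and a larger protocol; the prompt template and violence word list here are illustrative assumptions:

```python
# A toy sketch of the counting methodology behind studies like the GPT-3
# religion experiment: sample completions per group and count how often
# violence-related words appear. Model choice (gpt2) and the word list are
# illustrative assumptions, not the original study's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
VIOLENT_WORDS = {"violence", "violent", "attack", "kill", "bomb"}

def violent_completion_rate(group: str, samples: int = 20) -> float:
    """Fraction of sampled completions containing a violence-related word."""
    prompt = f"Two {group} walked into a"
    outputs = generator(prompt, max_new_tokens=30,
                        num_return_sequences=samples, do_sample=True)
    hits = sum(any(w in o["generated_text"].lower() for w in VIOLENT_WORDS)
               for o in outputs)
    return hits / samples

for group in ["Muslims", "Christians", "Buddhists"]:
    print(group, violent_completion_rate(group))
```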

Amazon trained a recruitment AI to find applicants based on resumes received over the previous 10 years. Since there are more men in the tech industry, the AI “learned” to prefer male candidates. As a result, resumes that included the word “women’s,” as in “women’s chess club captain,” were penalized.

Limitation 4: Generated content may be flat-out wrong

The New York Times highlights one vexing challenge, “The technology behind generative AI tools isn’t designed to differentiate between what’s true and what’s not true. The goal is to generate plausible content based on patterns, not to verify its truth.”

“AI hallucination” occurs when an LLM generates false, inaccurate, or illogical information. For example, the prompt “How many eyes does the sun have?” should not produce a numerical answer, yet it has. Similarly, ChatGPT has generated legal documents with compelling references to nonexistent court cases. It has even invented fiction to fill gaps in history.

And the really uncomfortable part about AI hallucinations? Even AI experts cannot fully explain why an algorithm generates a particular sequence of text or why the response changes from one run to the next. Hallucinations will remain part of AI for the foreseeable future.
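
One hedged mitigation, suggested by the observation that responses change between runs, is a self-consistency check: ask the same question several times and treat disagreement as a low-confidence signal. A minimal sketch, assuming the Hugging Face transformers library and the gpt2 model:

```python
# A hedged sketch of a simple self-consistency check: sample the same question
# several times and flag disagreement as a sign the answer may be unreliable.
# Model choice (gpt2) is an illustrative assumption.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def consistency_check(question: str, samples: int = 5) -> bool:
    """Return True when the most common completion wins a clear majority."""
    outputs = generator(question, max_new_tokens=10,
                        num_return_sequences=samples, do_sample=True)
    # generated_text includes the prompt, so strip it to compare completions.
    answers = [o["generated_text"][len(question):].strip() for o in outputs]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count > samples / 2

if not consistency_check("How many eyes does the sun have?"):
    print("Answers disagree across runs; treat the response as unreliable.")
```

This does not prove an answer is true; it only flags answers the model cannot reproduce, which is one common warning sign.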

Limitation 5: AI data is vulnerable

A bad actor can deliberately enter prompts to confuse AI models and corrupt training data, even in a closed model. These are called adversarial attacks and can result in:

  • Malicious code snippets that reduce security and increase system vulnerabilities (a screening sketch follows this list).
  • Misleading or incorrect code suggestions that lead developers down the wrong path during development. Don’t mess with The Zone!
  • Low-quality or non-functional code that degrades software performance or reliability. Poor performance means unhappy customers.  
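
As referenced in the list above, a first line of defense is to screen AI-suggested code before a developer accepts it. A minimal sketch follows; the red-flag patterns are illustrative assumptions, and a real pipeline would add static analysis and human review:

```python
# A minimal screening sketch for the list above: scan an AI-suggested snippet
# for red-flag patterns before a developer accepts it. The pattern list is an
# illustrative assumption, not an exhaustive security scanner.
import re

RED_FLAGS = [
    r"\beval\s*\(",                      # arbitrary code execution
    r"\bexec\s*\(",
    r"os\.system\s*\(",                  # shelling out
    r"subprocess\..*shell\s*=\s*True",
    r"(password|api_key)\s*=\s*['\"]",   # hardcoded credentials
]

def screen_suggestion(code: str) -> list[str]:
    """Return the red-flag patterns found in a generated code suggestion."""
    return [p for p in RED_FLAGS if re.search(p, code)]

suggestion = 'os.system("rm -rf /tmp/cache"); password = "hunter2"'
flags = screen_suggestion(suggestion)
if flags:
    print("Rejected AI suggestion; matched:", flags)
```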

Limitation 6: COBOL restrictions are real

Translation programs have been available for decades, yet the mainframe remains COBOL-based. So why is IBM® watsonx™ translating COBOL to Java? Why are universities teaching COBOL again? Because it’s easier to work with a standardized code base.
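
To show the shape of the translation task, here is a generic sketch of prompting a general-purpose model to convert a COBOL paragraph to Java, assuming the OpenAI Python client and the gpt-4o-mini model name (illustrative assumptions; IBM watsonx Code Assistant for Z uses its own tooling, and this sketch does not represent IBM’s method):

```python
# A generic illustration of the COBOL-to-Java translation task, assuming the
# OpenAI Python client. IBM watsonx Code Assistant for Z uses its own tooling;
# this sketch only shows the shape of the problem, not IBM's method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COBOL_PARAGRAPH = """\
       COMPUTE NEW-BALANCE = OLD-BALANCE + DEPOSIT-AMOUNT.
       MOVE NEW-BALANCE TO PRINT-LINE-BALANCE.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Translate this COBOL paragraph to idiomatic Java, "
                   "preserving the arithmetic exactly:\n" + COBOL_PARAGRAPH,
    }],
)
print(response.choices[0].message.content)
```

Note the catch: sending proprietary COBOL to a public endpoint is exactly what policies like Broadcom’s forbid, so in practice this kind of translation would require a private deployment.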

The Open Mainframe Project’s generative AI COBOL coding capabilities are available but lacking. As Venkat Balabhadrapatruni, a distinguished engineer at Broadcom Software, stated in a webinar, “The available open-source COBOL code must be prepared and progressed. It’s not ready for beginners or daily use.”

Is there another transaction-handling code base that could do COBOL’s job? Maybe. Could generative AI translate all code formats into a new, standard format? Perhaps. But for now, communicating across codes is still laborious and disjointed, limiting AI mainframe applications.

Generative AI is a Gamble

Generative AI is alluring. It can save mainframe companies time, money, and reskilling effort while advancing competitiveness and modernization.

Yet the functionality and benefits of AI depend heavily on the data an AI model is trained on and on a business’s risk tolerance.

Tom Taulli, author of Generative AI: How ChatGPT and Other AI Tools Will Revolutionize Business, acknowledges a deeply rooted cultural aversion to change: “Mainframe companies want to innovate, but they also hold a deep sense of responsibility to their customers and understand the risk of destabilizing customer relations.” 

AI may be limitless, but many of its risks are quantifiable. In gambling, understanding the odds enables a participant to make informed choices. The same is true for AI. Whether companies take a measured gamble and potentially gain a business advantage remains to be seen.

Looking for more about generative AI and mainframes? Catch 8 beneficial use cases.

Penney Berryman, MPH (she/her), is a digital marketing storyteller at the intersection of culture and technology. Owner of Copper Sage Consulting, Penney blends creativity and results-driven expertise to craft captivating narratives.

Connect with her on LinkedIn.
