Europe’s AI Future: Balancing Regulation, Trust, and Innovation

Aug 26, 2025

Uwe Graf is the Lead Modernization Architect at EasiRun Europa GmbH. He is also an IBM Champion, a 2025 Influential Mainframer, and a frequent contributor on LinkedIn.

Regulation is not an obstacle to innovation, but an enabler

What artificial intelligence (AI) opportunities and challenges are unique to Europe? This was the primary question during the kickoff for the GSE European AI Working Group in early July in Frankfurt am Main, Germany. The event brought together leading experts from industry, research, public institutions, and regulatory bodies. The discussions focused on governance, innovation, and real-world applications. The AI Working Group agreed that trust, regulation, and technical excellence should form the pillars of a future-proof AI strategy.

The advancement of AI in Europe is closely tied to the well-defined regulatory framework of the EU AI Act, which entered into force in 2024 as the world’s first comprehensive regulation on artificial intelligence.

The EU AI Act aims to create a consistent and trustworthy ecosystem for AI applications that fosters innovation while minimizing risks.

The Act classifies and regulates AI systems according to their risk level, ensuring a human-centric and trustworthy approach to AI development. It establishes a unified regulatory framework across Europe that defines technical requirements for AI systems and sets ethical and safety standards. Most provisions of the Act will become fully binding by August 2026.


Expert Insights: AI Compliance and Risk Management

Professor Dr. André Liebscher from the University of Applied Sciences Kaiserslautern emphasized that many companies still underestimate the scope of these regulations and must urgently adapt their internal processes.

In addition to ensuring that existing systems are compliant, the EU AI Act requires:

  • Robust risk management strategies
  • Ongoing performance monitoring
  • Comprehensive documentation of models and training data

Compliance with these rules is particularly crucial for “high-risk AI systems,” such as those deployed in critical sectors like healthcare, finance, and infrastructure.
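As a rough illustration of how such documentation obligations can be made operational, the sketch below records basic model metadata in a machine-readable form. The field names and risk labels are assumptions chosen for illustration only; they are not terms or formats prescribed by the EU AI Act.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Illustrative sketch only: field names are assumptions, not EU AI Act terminology.
@dataclass
class ModelRecord:
    """Minimal machine-readable documentation entry for an AI system."""
    name: str
    version: str
    intended_purpose: str
    risk_category: str                  # e.g. "high-risk" per the provider's own assessment
    training_data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    last_reviewed: str = date.today().isoformat()

record = ModelRecord(
    name="credit-scoring-model",
    version="2.3.1",
    intended_purpose="Pre-screening of consumer loan applications",
    risk_category="high-risk",
    training_data_sources=["internal_loan_history_2015_2023"],
    known_limitations=["Limited data for applicants under 21"],
)

# Store the record alongside the model artifact so documentation and model stay in sync.
print(json.dumps(asdict(record), indent=2))
```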

Taras Holoyad, an editor at the International Organization for Standardization (ISO), stressed that quality assurance must be at the core of every AI strategy. He pointed out that robustness, fairness, explainability, and transparency must be continuously monitored to meet regulatory requirements and maintain user trust. Since many AI models are often perceived as “black boxes,” approaches such as Explainable AI (XAI) are essential to make decision-making understandable and to detect potential biases.
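As a hedged illustration of what such explainability checks can look like in practice, the sketch below applies permutation importance, a common model-agnostic technique, to a scikit-learn classifier on a public dataset. It is only one of many XAI approaches and is not the specific tooling discussed at the event.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in accuracy: large drops mark the
# features the model actually relies on, a first step toward explainability
# and bias review.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda item: item[1], reverse=True)[:5]:
    print(f"{name}: {importance:.3f}")
```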

Generative AI and Foundation Models: Governance and Opportunity

Following discussions on regulation and trust, the Working Group shifted to technological advancements such as Generative AI (GenAI) and foundation models. These cannot be viewed in isolation as they are deeply intertwined with governance and compliance. These technologies form the backbone of the next wave of innovation, offering new possibilities for automation, creativity, and efficiency across businesses, government agencies, and society.

Professor Jonas Offtermatt from Baden-Wuerttemberg Cooperative State University Stuttgart explained that combining deep learning approaches with transformer architecture—initially developed for language models—inadvertently laid the foundation for the current paradigm of generative AI. 

Unlike classical machine learning methods, these models can detect patterns and autonomously generate new content such as text, images, or code. Use cases like conversational chatbots, code generation tools for software development, or multi-agent systems demonstrate how GenAI can – and already does – accelerate and personalize workflows.
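As a minimal, hedged sketch of the generative building block behind such use cases, the example below calls a small open text-generation model through the Hugging Face transformers pipeline. The model choice and prompt are placeholders; real chatbots or coding assistants would add retrieval, guardrails, and evaluation around this core step.

```python
from transformers import pipeline

# Small open model used purely for illustration.
generator = pipeline("text-generation", model="gpt2")

prompt = "Write a short status update for a mainframe modernization project:"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```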

Dr. Wolfgang Hildesheim of IBM Germany and Dr. Marcel Ziems from Leibniz University Hannover showcased concrete applications of foundation models in the public sector, particularly in geoinformation systems.

One striking example was AI-powered building recognition for cadastral maps, where large-scale orthophotos are automatically analyzed using specialized deep learning models. This solution not only reduces the effort of manual mapping but also enables much faster updates of existing datasets. An “AI factory” approach is used here, ensuring both agile model adaptation and rigorous quality control.
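To make the task more concrete, here is a deliberately tiny sketch of the kind of model such a pipeline relies on: a fully convolutional network that maps an RGB orthophoto tile to a per-pixel building probability mask. The architecture and tile size are assumptions for illustration; the production “AI factory” models described at the event are far more sophisticated.

```python
import torch
import torch.nn as nn

class TinyBuildingSegmenter(nn.Module):
    """Toy fully convolutional network: RGB tile in, building probability map out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one logit per pixel: building vs. background
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

model = TinyBuildingSegmenter()
tile = torch.rand(1, 3, 256, 256)   # one 256x256 RGB orthophoto tile
mask = model(tile)                  # probability map, shape (1, 1, 256, 256)
print(mask.shape, float(mask.mean()))
```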

Khadija Souissi from IBM highlighted the strategic importance of enterprise-specific foundation models for the finance, insurance, and logistics industries. These models, trained or fine-tuned directly on a company’s internal data, can automate processes, minimize risks, and significantly boost operational efficiency. According to Souissi, this technology allows the IBM z17 platform to deliver real-time analytics and predictive AI capabilities, handling tasks that previously required weeks or even months of manual evaluation.


At the same time, the group repeatedly stressed the need for strict governance structures around generative AI. Key challenges noted include:

  • preventing hallucinations (incorrect or fabricated outputs)
  • minimizing bias
  • ensuring the traceability of model decisions
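As one hedged example of what traceability can mean in practice, the sketch below appends every prompt/response pair to an audit log with a timestamp and content hash, so that individual outputs can later be located and reviewed. The record fields, file format, and function name are assumptions for illustration, not a prescribed governance standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_generation(prompt: str, response: str, model_name: str,
                   path: str = "genai_audit.jsonl") -> str:
    """Append an audit record for one generation and return its content hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "prompt": prompt,
        "response": response,
    }
    # Hash the record so any later tampering with the log entry is detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record["sha256"]

log_generation("Summarize the Q2 risk report.", "(placeholder model output)", "example-model")
```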

Europe’s Path to Responsible AI Leadership

Europe’s strict regulatory framework and values-driven approach stand out in a global landscape where data privacy and data ethics are gaining increasing attention. Focusing on transparency, accountability, and trust may be a unique competitive advantage for EU countries.

In the near term, the Working Group urges companies to develop guidelines for responsible AI use, ensuring transparency, ongoing result validation, and adherence to ethical standards. 

The successful adoption of generative AI in Europe depends on technological excellence, the ability to establish clear rules, and robust quality assurance.

Europe can play a leading role in shaping and governing responsible AI. Unlike some regions, Europe is not solely focused on speed and disruptive innovation at any cost.

The three main traits that will aid Europe’s successful AI development, use, and governance are:

  1. A long-term perspective. The EU will continue combining technological excellence with regulatory foresight and ethical principles. This strategic approach offers a long-term advantage, particularly in sensitive sectors such as healthcare, finance, or public administration, where trust is the key to success.
  2. “German Angst” as a strength. While other markets often rely on “trial and error,” Europe adheres to the principle of “better to think things through thoroughly than to fix mistakes in haste.” One might even say that the proverbial “German Angst” is becoming a competitive advantage: rigorous skepticism and critical questioning of technologies lead to developing more stable, secure, and trustworthy systems in the long run.
  3. Regulation as a competitive advantage. We consider regulation not an obstacle to innovation but rather an enabler. When intelligently aligned with technological progress, it’s a catalyst. Generative AI and foundation models present tremendous opportunities if combined with governance, fairness, and transparency. This makes Europe a hub where responsibility and progress go hand in hand.

Next Steps and EU AI Compliance Resources

The next GSE European AI Working Group will convene at the 19th International GSE Conference, September 17–19, at the Stuttgart Marriott Hotel Sindelfingen in Germany. Registration for the AI Working Group is open.

Download the free EU AI Act self-assessment checklist to identify compliance gaps, reduce legal risk, and ensure your AI systems align with the law.
