EU AI Act Explained: What Businesses Must Know - Risks, Rules & AI Literacy with Paul Larsen, our Head of Data & AI |🎙️#58

Illustration of a person and a robot with the text "DevOps Accents Episode 58. EU AI Act Explained: What Businesses Must Know - Risks, Rules & AI Literacy" on an orange background.

For episode 58 of DevOps Accents, Paul Larsen, our Head of Data & AI, returns to the podcast to unpack the EU AI Act, a cornerstone of European AI regulation and one of the most comprehensive AI laws introduced anywhere, and what it really means for businesses, from startups to enterprise giants. With Pablo and Kirill, we dive deep into the current state of AI regulation in Europe, how to navigate AI risk categories, and why AI literacy isn't just a buzzword: it's already law.

🚨 Spoiler alert: If your company uses AI, you will be affected. The question is: are you prepared?

In this episode:

  • The speed of implementation of the EU AI Act;
  • Total AI freedom for defence;
  • AI risk categories;
  • Understanding AI high risk;
  • AI literacy push.

You can listen to episode 58 of DevOps Accents on Spotify, or right now:


Understanding the EU AI Act's Rapid Implementation and Its Impact on Innovation

As AI adoption accelerates, the EU’s AI Act introduces significant regulatory obligations for companies developing and using AI systems. Paul Larsen, Head of Data and AI at mkdev, provided insights into how this new legal framework is shaping AI deployment, especially for businesses in Europe.

Paul highlighted a key difference between the AI Act and earlier data laws like the GDPR: “The AI Act applies only to AI systems that are actually being used in the market, with minimal obligations during the R&D stage,” he explained. This focus on active deployment means companies can experiment more freely—but once those experiments move into production, compliance becomes essential.

Leo noted that the Act’s speed of implementation, especially compared to GDPR, shows the urgency of regulating AI. With the EU AI Act now finalized and approved, the clock is ticking for organizations to comply. Paul added that the viral rise of large language models prompted lawmakers to add entire sections to the original proposal. “The act had to catch up with generative AI developments post-2022,” he said.

AI in Defense: Exploring the Unrestricted Freedoms and Their Business Implications

Pablo questioned whether Europe’s role in regulation is reactive, given that most large AI models are developed in the U.S. or China. Paul agreed to some extent but emphasized that many enterprise AI use cases don’t rely on massive models like GPT or Claude. “The majority of applied AI in businesses still comes from older, university-developed techniques,” he said, pointing to models used for recommendation systems and fraud detection.

Leo raised a more philosophical point—whether the AI Act is truly about safety or a bid for global regulatory dominance. Paul pushed back gently: “If the goal were power, the Act would be a failure. It focuses explicitly on areas where AI could impact health, safety, and fundamental rights. It leaves space for innovation where risk is minimal.”

Deciphering AI Risk Categories: From Prohibited to Minimal Risk Systems

The regulation introduces an AI risk pyramid approach, breaking down systems into clear AI Act risk classifications. Among the examples of unacceptable risk in the EU AI Act are systems involving mass surveillance and real-time facial recognition in public spaces.

Kirill asked why the regulation specifies certain tools—such as AI in fake review generation—when the behavior itself is already illegal. Paul acknowledged this overlap but emphasized that the AI Act is about method as much as outcome. For example, while fake reviews are already banned, using AI to mass-generate them represents a new scale of potential abuse.

Leo and Kirill also discussed the Act’s categorization of AI systems into three risk levels: prohibited, high-risk, and minimal risk. Paul clarified that high-risk systems involve use cases like hiring, healthcare, or financial decisions, where mistakes can have significant consequences. Minimal-risk systems, like chatbots, need only basic transparency such as a disclosure: “I am an AI.”
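The three tiers Paul describes map naturally to a lookup that a compliance team might use as a first triage pass. The sketch below is purely illustrative: the use-case names and obligation summaries are our assumptions for the example, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. real-time facial recognition in public spaces
    HIGH = "high"              # e.g. hiring, healthcare, financial decisions
    MINIMAL = "minimal"        # e.g. chatbots; transparency obligations only

# Hypothetical mapping of example use cases to tiers, based on the episode.
USE_CASE_TIERS = {
    "mass_surveillance": RiskTier.PROHIBITED,
    "hiring_screen": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "support_chatbot": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a rough, non-legal summary of what each tier implies."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is RiskTier.PROHIBITED:
        return "banned: may not be placed on the EU market"
    if tier is RiskTier.HIGH:
        return "strict: risk management, documentation, human oversight"
    if tier is RiskTier.MINIMAL:
        return "transparency: disclose to users that they are talking to an AI"
    return "unclassified: add to inventory and assess"

print(obligations("support_chatbot"))
```

The point of a triage table like this is not legal precision but consistency: every system in the company gets sorted the same way before lawyers look at the edge cases.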

High-Risk AI Systems: What Businesses Need to Know to Stay Compliant

Kirill noted an eyebrow-raising exemption: defense use of AI is completely unrestricted under the Act. Paul confirmed this and added that private contractors working for defense are also exempt. This sparked discussion about potential workarounds, like municipalities outsourcing surveillance systems to military contractors—an area that remains murky.

Paul also discussed systemic risk, especially from large language models that are deeply embedded in the AI supply chain. “If something goes wrong with a general-purpose model, the consequences ripple out across every system that uses it. That’s a different kind of risk altogether,” he said.

Being labeled a high-risk AI system brings the AI liability directives into play, along with stricter rules and restrictions that could significantly affect your development lifecycle.

Action Steps for Companies: Inventory, Literacy, and Avoiding Overreaction

The EU AI Act mandates an AI literacy framework for employees interacting with AI systems. But who needs AI literacy training exactly? "Anyone affected by or working with AI under the new AI regulatory frameworks," Paul says.

Paul laid out practical advice for businesses trying to navigate the new landscape. First, they need to build a full AI inventory to understand where and how AI is being used. “If you don't know what you have, you can't manage the risk,” he said. From there, companies can identify which systems fall into high-risk categories and begin compliance efforts—ideally before the law comes fully into force in 2026.
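Paul's "inventory first" advice can be made concrete with a small record per system. The sketch below is a minimal illustration under our own assumptions (the field names and the high-risk category set are invented for the example); note how it also captures Paul's point that the Act focuses on systems actually in production, not R&D experiments.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    use_case: str        # e.g. "hiring", "fraud_detection", "chatbot"
    in_production: bool  # the Act focuses on systems actually on the market

# Illustrative only; the real high-risk categories come from the Act itself.
HIGH_RISK_USE_CASES = {"hiring", "healthcare", "credit_decisions"}

def needs_compliance_review(record: AISystemRecord) -> bool:
    # R&D-stage systems carry minimal obligations; deployed high-risk ones do not.
    return record.in_production and record.use_case in HIGH_RISK_USE_CASES

inventory = [
    AISystemRecord("CVScreener", "acme-ai", "hiring", in_production=True),
    AISystemRecord("LabBot", "internal", "chatbot", in_production=False),
]
flagged = [r.name for r in inventory if needs_compliance_review(r)]
print(flagged)
```

Even a spreadsheet with these four columns is enough to start: you cannot scope compliance work until the flagged list exists.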

Paul warned against both underestimating and overreacting to the Act. Some companies freeze all innovation out of fear, while others remain unaware they’re even affected. “It’s not the crime, it’s the coverup,” he joked, borrowing a line from a Deutsche Bank risk leader. Best effort and documented steps toward compliance go a long way.

The Push for AI Literacy: Preparing Your Workforce for Upcoming Regulations

One key part of the AI Act already in force is AI literacy. Pablo and Leo asked Paul to elaborate on what this means in practice. “It’s not about knowing machine learning algorithms,” Paul said. “It’s about knowing the system you're interacting with, what can go wrong, and how to spot it.”

Paul argued that AI literacy should be more like workplace safety briefings: simple, practical, and tailored to actual risks. For example, someone using a fraud detection system doesn’t need to understand neural networks—but they do need to know what kinds of false positives the system might produce and how to react.

Kirill raised concerns that companies might approach this like they do GDPR training—checkbox compliance via generic quizzes. Paul advised against it. “The Act’s spirit is about empowering users. Off-the-shelf platforms won’t get you there unless they’re tailored.”

Final Thoughts

With regulations now in force and more coming soon, companies must begin AI risk assessments, literacy programs, and system audits sooner rather than later. As Paul put it: “You don’t want to scramble at the last minute. If you build in the right processes now, compliance will be smoother and your innovation pipeline won’t suffer.”

The EU AI Act is here, and ignoring it isn’t an option. Whether you’re building AI products or just using them internally, mkdev can help you:

  • Audit your AI systems and classify risks
  • Build your AI inventory and documentation
  • Deliver targeted AI literacy training—practical, not theoretical

Let’s talk. We’ll make sure you’re not just compliant — you’re confident. Book a free consultation or reach out directly at paul@mkdev.me.


Show Notes


Podcast editing: Mila Jones, milajonesproduction@gmail.com