The carrot and stick of the EU AI Act's literacy requirements: Benefits, Compliance, and Risks


As AI adoption in business accelerates, AI governance and compliance have become critical. With the EU AI Act's first obligations in force since February 2, 2025, including a rather broad AI literacy requirement, companies must ensure appropriate AI literacy among their staff to mitigate risks like data leaks and regulatory fines. In our previous article, we saw that ANY use of ChatGPT or other AI chatbots at work means your company is required to ensure appropriate AI literacy for relevant staff.
In this post, we walk you step by step through establishing targeted, effective AI literacy in your company, so that you gain even more benefit from AI chatbots while minimizing risks like data leakage or being misled by biased responses.
Part 1: The Carrot
The Benefits of AI Compliance – Responsible AI, Higher Productivity & Engagement
AI chatbots share some similarities with good ol' search engines, yet the risks and benefits are sufficiently different that literacy in their failure modes and prompting tricks pays off. It helps you avoid data breaches (like the Samsung employees who pasted proprietary code into ChatGPT, or the law office lacking proper guardrails for chatbot usage) and mistakes from improper use, while increasing your staff's effectiveness with, and adoption of, AI chatbots. That is why it is wise to invest in corporate AI responsibility to help your business harness AI's potential.
Effective usage and adoption of AI chatbots in business
We all had to learn how to get Google to return valuable results. If that has become second nature to you, watch the YouTube clip (at 45s) from the show Great News, in which the retirement-age intern, mother of the main character, (mis-)uses a search engine for the first time.
We learned to drop common words like articles and to include query-specific terms like concrete nouns and names. These tricks yielded better results because of the machinery under the hood of early search engines: ranking schemes gave relatively little weight to common words and more weight to rare ones, while Google's PageRank algorithm boosted pages that other pages linked to.
AI chatbots work best with different inputs than search engines, and they also pose different risks of misuse. Good AI governance helps businesses use them responsibly while reducing errors and misinformation. To get the most benefit out of AI chatbots, it's key to formulate your task description in a way that maximizes the chance of getting a true and useful reply while minimizing the risk of "hallucinations," meaning plausible-sounding but factually incorrect responses.
Some examples, made concrete in the sketch after this list, are
- the original GPT-3 paper, which demonstrates how providing a few examples in a prompt can lift question-answering accuracy from around 60% to state-of-the-art,
- the chain-of-thought technique to guide an AI chatbot through a series of intermediate steps, and
- specific prompt guidance for more advanced AI chatbots.
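To make the first two techniques concrete, here is a minimal sketch in Python. The prompt wording and questions are our own illustrations; you would pass these strings to whichever chatbot or API your company uses.

```python
# Minimal sketches of two prompting techniques (illustrative prompts only).

# 1. Few-shot prompting: prepend a handful of worked examples so the
#    model infers the task and answer format before the real question.
few_shot_prompt = """\
Q: Which country is Reykjavik the capital of?
A: Iceland

Q: Which country is Canberra the capital of?
A: Australia

Q: Which country is Ottawa the capital of?
A:"""

# 2. Chain-of-thought prompting: ask for intermediate reasoning steps
#    before the final answer, which tends to help on multi-step tasks.
chain_of_thought_prompt = """\
A train departs at 09:15 and arrives at 11:45. How long is the trip?
Think through the intermediate steps, then give the final answer on
its own line."""

print(few_shot_prompt)
print(chain_of_thought_prompt)
```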
You can already see potential for data leaks and misuse. First, data leaks. AI chatbots typically yield better results the more context or background data you give them. What if the relevant background is proprietary knowledge or data?
Data leak risks of AI chatbot misuse
- illegal transfer of personally identifiable information (PII) to chatbot providers, resulting in GDPR violations
- leaking of sensitive company data to chatbot providers
GDPR requires a valid legal basis, such as specific consent, before personal data may be processed, and it grants data subjects the right to know where and how their personal data is used. If your company is uploading personal data of its customers (or employees) to an AI chatbot without proper authorization and controls, your company is subject to fines of up to 20 million EUR or 4% of annual revenue, whichever is greater.
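One concrete control worth teaching alongside these rules is to strip or pseudonymize personal data before a prompt ever leaves your infrastructure. Below is a minimal sketch using simple regex-based redaction; a production setup would rely on a dedicated PII-detection tool plus your own legal review.

```python
import re

# Minimal, illustrative PII redaction before a prompt is sent to a
# chatbot provider. Real deployments need proper PII detection,
# not just regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens so personal data
    never leaves company infrastructure."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to anna.schmidt@example.com (+49 170 1234567)."
print(redact(prompt))  # Draft a reply to [EMAIL] ([PHONE]).
```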
When the Samsung IP leak story broke, one of the disaster scenarios being discussed was in fact extremely unlikely: that the proprietary Samsung code uploaded to ChatGPT would be played back directly to other ChatGPT users ("Hey ChatGPT, would you please share with me the proprietary Samsung code its engineer shared?"). This isn't how large language models work: user inputs are not stored in a database that other users can query, although they may later influence model training.
Nevertheless, any uploaded data has left your company's IT environment, meaning the attack surface for cybercriminals to find and abuse your company's data has grown accordingly.
Algorithmic and data bias failure modes of AI chatbots
Here we mean bias in the technical sense: a dataset or algorithm whose properties differ significantly from those of the population it is meant to represent. In terms of datasets, the training data for the LLMs powering AI chatbots has a cutoff date, meaning the underlying AI hasn't been trained on the most recent, and perhaps most relevant, data. Some AI chatbots have added features, such as live web search, to mitigate this shortcoming, but it's still key for users to recognize this fundamental limitation and adjust their usage accordingly.
Similarly, the algorithms that power AI chatbots have been developed to perform well on a large number of tasks, but not equally well on all of them. As an example of this algorithmic bias, LLM performance at the game of chess has shown both improvements and dramatic regressions as models are updated and "improved."
Put another way, there is still "no free lunch": you and your company need to be aware of which uses of AI chatbots play to their strengths, and which are likely to trigger their weaknesses.
Part 2: The Stick
The Risks of AI Non-Compliance – Fines, Reputation Damage & Regulatory Impact
Failing to meet AI regulatory compliance can result in hefty fines, reputational harm, and loss of customer trust. The most serious violations carry fines of up to 35 million EUR or 7% of your company's worldwide annual revenue, whichever is higher. Failure to meet the AI Literacy requirement by itself does not result in fines, yet there are still plenty of reasons to take the requirement seriously.
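As a quick worked example of "whichever is higher" (the revenue figures below are invented):

```python
def max_ai_act_fine(annual_revenue_eur: float) -> float:
    # Worst-case AI Act fine for the most serious violations:
    # the greater of 35 million EUR and 7% of worldwide annual revenue.
    return max(35_000_000.0, 0.07 * annual_revenue_eur)

print(max_ai_act_fine(2_000_000_000))  # 7% dominates: 140,000,000.0 EUR
print(max_ai_act_fine(100_000_000))    # floor dominates: 35,000,000.0 EUR
```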
For one, the highest fines go with the most egregious violations. If your company has already met its AI Literacy requirement, then
- your company is more likely to find (and fix) other AI Act violations before fines are issued
- your company can demonstrate "best effort" to regulators by pointing to its AI literacy program
The second point is a key one. Regulators are people too, and, in my experience, reasonable ones who want to work with the companies they regulate, not against them. By establishing a robust AI Literacy program for your company, you will start your AI Act relationship with national regulators on the right foot.
How does my company meet the AI Literacy requirement?
The full wording of the AI Literacy requirement makes it clear that there is no one-size-fits-all approach. Here's the full text of Article 4, with emphasis added:
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account *their technical knowledge, experience, education and training* and *the context the AI systems are to be used in*, and considering *the persons or groups of persons on whom the AI systems are to be used*.
Sitting your staff in front of a few YouTube videos will almost certainly not provide the required literacy, since
- AI literacy has to address your company's specific usage of AI
- AI literacy has to respect your current staff's backgrounds
- AI literacy needs to consider all end-users of each AI system
What, then, do you need to satisfy the AI Literacy requirement? Four steps:
1. Identify and describe the AI systems in use at your company with an AI Inventory (a minimal example structure is sketched after this list).
2. Cluster your company's AI systems into groups with related functionality and failure modes.
3. Cluster your staff into groups according to their usage of AI and their backgrounds.
4. Roll out literacy programs to cover each distinct combination of AI system group and user group.
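The AI Act does not mandate a format for the AI Inventory in step 1. As a minimal sketch, a structure as simple as the following (the fields and the example entry are our own assumptions) is already enough to support the clustering in steps 2 and 3:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    """One row of a minimal AI Inventory (illustrative fields only)."""
    name: str                  # e.g. the product name of the AI system
    provider: str              # who supplies the system
    business_use: str          # what staff actually use it for
    data_shared: list[str]     # categories of data sent to the provider
    failure_modes: list[str]   # known risks to cover in literacy training
    user_groups: list[str] = field(default_factory=list)

inventory = [
    AISystemEntry(
        name="ChatGPT",
        provider="OpenAI",
        business_use="drafting marketing copy and summarizing documents",
        data_shared=["public product descriptions"],
        failure_modes=["hallucinated facts", "training-data cutoff"],
        user_groups=["marketing", "sales"],
    ),
]
```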
It is often possible to reuse training material from one group for another. The key is to focus on the benefits and failure modes of the given (group of) AI systems, together with how the relevant staff can understand and proactively apply this knowledge.
It's tempting for people like myself, with PhDs and years of experience in statistics, data, and programming, to claim that our hard-won knowledge is essential for AI Literacy, but this is rarely true. That's why we at mkdev have largely cut the typical explanations of statistics and generic machine-learning theory in favor of real-life examples and accessible mental models that give staff working knowledge, not just impressive-sounding theory.
The AI Act makes AI literacy a requirement, and we make it a reality. We build a tailored AI literacy program that empowers your team and keeps your business compliant.
The result? AI users who get better results from AI while mitigating the risks of misuse.
Series "EU AI Act Explained by Paul Larsen"
- What is the EU AI Act, and why does it matter?
- When does the EU AI Act come into force, and what does this mean for your business?
- My company is using ChatGPT. Does the AI Act Literacy requirement apply to us?
- The carrot and stick of the EU AI Act's literacy requirements: Benefits, Compliance, and Risks