Insights on AI Act Article 4

Disclaimer: This is a post to spread general awareness, not legal advice.

It is now a little over a year since Article 4 of the AI Act started applying (on 2 February 2025), so it is high time to take a look at its AI literacy requirements. It states:

‘Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.’

It is intentionally a little vague to accommodate the fact that artificial intelligence is a fast-moving field, and detailing the requirements too tightly could make the rules outdated very quickly. In short, the AI literacy requirement aims to make sure people dealing with AI systems understand the technology, how it affects them, and its potential for both good and bad, in the context in which they deal with AI. Personal, non-professional use is excluded.

AI Literacy

As an example of how context matters, take the following scenarios:

  • A knowledge worker using ChatGPT to research information about a new productivity tool
  • A data scientist gathering training data for a new state-of-the-art machine learning predictor
  • A developer making an LLM-powered HR app for job application filtering

You might think these scenarios would warrant different types of AI literacy, and you are correct. Article 3, point 56 gives the definition: ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.

Providers and deployers of AI systems (defined in Article 3, points 3 and 4, with ‘AI system’ itself defined in point 1) are quite a broad audience, and when you add ‘affected persons’ to the mix, a lot of people need to understand a lot of different systems. Therefore, the AI literacy effort should start with identifying what AI systems you have in use or are providing, and what kinds of people are affected.

Identification

It all starts with the AI systems in your organisation. Are you developing AI systems for internal or client use? What kinds of AI systems do you use? On-premises inference endpoints, proprietary chats like ChatGPT, Claude and Gemini, classical machine learning applications and tools using LLM APIs all count as AI systems. This is also a perfect opportunity to investigate Shadow AI, where staff use, for example, proprietary chats that you have not commissioned for organisational use. Shadow AI deserves its own blog post, so we won’t go further down that path here.

Check the risk categorisation of the AI systems in development or use, per the AI Act definitions. Chapters II and III, together with Annex III, tell you about prohibited and high-risk systems. Prohibited systems are, well, prohibited (in most cases). High-risk systems should be given particular attention when implementing the AI literacy initiative.
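The inventory and risk-categorisation steps above can be sketched as a simple register. The structure below is purely illustrative: the field names, risk buckets and example entries are our own invention, and a real assessment must of course follow Chapters II and III and Annex III rather than an enum.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk buckets loosely following the AI Act's categorisation;
# the actual classification must be done against the Regulation's text.
class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_OR_MINIMAL = "limited-or-minimal"

@dataclass
class AISystemRecord:
    name: str
    role: str              # "provider" or "deployer"
    purpose: str
    risk: RiskCategory
    users: list[str]       # staff groups and affected persons

# Example entries echoing the scenarios earlier in this post.
inventory = [
    AISystemRecord("ChatGPT (web)", "deployer", "research and drafting",
                   RiskCategory.LIMITED_OR_MINIMAL, ["knowledge workers"]),
    AISystemRecord("CV screening app", "provider", "job application filtering",
                   RiskCategory.HIGH_RISK, ["HR staff", "job applicants"]),
]

# High-risk systems deserve particular attention in the literacy initiative.
needs_focus = [s.name for s in inventory if s.risk is RiskCategory.HIGH_RISK]
print(needs_focus)
```

Even a lightweight register like this makes it easy to see which systems, roles and audiences your training needs to cover.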

Get a good understanding of your staff. What is the technical baseline of non-technical staff, what kind of experience do they have, and so on. Even AI developers and data scientists are covered by the requirement to ensure a sufficient level of AI literacy; for them, this could mean, for example, training on AI ethics. In addition to staff, other persons affected by the AI systems you develop or deploy might need training provided by you.

Based on these assessments, you can start crafting or improving your AI literacy initiatives. Make sure to take the whole context into consideration and be realistic about the extent of training needed based on resources, organisation size and AI you work with.

Compliance

Enforcement of Article 4 begins on 2 August 2026. Enforcement and supervision fall mostly under the jurisdiction of national market surveillance authorities in the Member States. Some information about these authorities is published on the web, for example Market Surveillance Authorities under the AI Act. Penalties for non-compliance can be imposed as fines, but the penalty is meant to be proportionate, meaning that accidentally forgetting to train one office assistant on hallucinations in a chat application will most likely not net a multi-million euro penalty.

Documentation of the AI literacy initiatives and metrics of completion should be kept by the organisation, since this would presumably be the main avenue for compliance checks by authorities. Documentation should cover both the approaches and the results of the initiatives: for example, not only providing training material and workshops, but also keeping records of attendance.
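As a minimal sketch of what keeping such records could look like, consider tracking per-session attendance and deriving a completion metric. Everything here is a hypothetical illustration of our own; the AI Act does not mandate any particular format for this documentation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingSession:
    """One AI literacy session, with who was invited and who attended."""
    topic: str
    held_on: date
    invited: set[str]
    attended: set[str] = field(default_factory=set)

    def completion_rate(self) -> float:
        # Share of invited staff who actually attended: a simple
        # result metric to keep on file alongside the training material.
        return len(self.attended & self.invited) / len(self.invited)

session = TrainingSession(
    topic="LLM hallucinations for office assistants",
    held_on=date(2026, 3, 1),
    invited={"alice", "bob", "carol", "dan"},
    attended={"alice", "bob", "carol"},
)
print(f"{session.completion_rate():.0%}")
```

A record like this documents both the approach (topic, date, audience) and the result (attendance), which is exactly the pairing suggested above.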

As a rule of thumb, it takes you a long way if you can use the documentation to show what AI systems you operate and what people do with them, and make the argument that a best effort was made to ensure sufficient AI literacy, respecting the differences in prior technical knowledge between staff and affected persons.

Resources and contacts

You can look for inspiration in the AI literacy practices repository. We would like to emphasise that replicating any of these practices in your organisation does not automatically grant compliance with Article 4: the AI literacy initiative has to take into account what you do and what AI systems you use. Having said that, finding a similar organisation and looking at what they did can give some ideas about scope and implementation.

The European Commission also hosts an excellent Q&A collection on AI literacy under Article 4. These insights can provide useful information, especially on compliance and enforcement.

European Digital Innovation Hubs offer services especially for SMEs and public organisations. You can browse the Hub catalogue to find one closest to you and look or ask for free training and materials. Note: we do not make any guarantees of availability of these free resources.

We can, however, help you get started or improve on your current initiatives. At Redfield, we have been providing tailored AI training for organisations ranging from enterprises to European institutions. If you want to talk more about Article 4 or are looking for help with your AI literacy initiatives, don’t hesitate to contact us!