The first substantive parts of the EU’s AI Act are now binding on businesses in the EU – and beyond. Although most of the new law’s obligations will start to come into effect on 2 August 2026, the first tranche of rules became applicable today (2 February 2025). From now on, companies providing or deploying AI systems need to comply with ‘AI literacy’ requirements, and certain AI practices are prohibited.
In this blogpost we’ll focus on AI literacy and provide practical tips on how companies can work towards compliance with this broad obligation. For those interested in checking whether they are subject to the prohibition of certain AI practices, please see our earlier post ‘#2 Which types of AI are regulated under the AI Act?’.
[You can find all episodes of our EU AI Act unpacked blog series by clicking here.]
New ‘AI Literacy’ obligations
The AI Act states that companies that provide (ie develop) or deploy (ie use) AI systems must take measures to ensure, ‘to their best extent’, a sufficient level of AI literacy among their staff and others dealing with the operation and use of AI systems on their behalf. Key aspects include:
- No specific definition of the obligation: The AI Act does not specify how AI literacy should be achieved; the details are left for companies to decide. In the future, voluntary codes of conduct or guidelines published by the EU’s AI Office or national authorities will provide guidance and best practices, with the AI Office guidance expected by the end of 2025. Still, the obligation is already applicable now. Reports suggest that the AI Office expects companies to have skilled up their staff to at least a general level of knowledge of the AI Act from 2 February 2025 onwards.
- ‘Sufficient level’ of training: In any case, an important part of any AI literacy toolkit will be appropriate training. According to the AI Act, such training should be adapted to the level of experience and knowledge of employees and to the extent and context of the use of AI in the company. For purely internal use of AI tools such as large language models that generate text or images (eg chatbots such as ChatGPT or Gemini), a short presentation providing an overview of the relevant obligations under the AI Act and of how to use AI might be sufficient. However, when using AI tools with an external impact, or in sensitive areas such as HR, it might be necessary to provide more specific training that addresses the risks in the relevant context. The same applies to staff involved in developing AI applications or models.
- Promotion and form of training: The EU AI Act does not prescribe the form of training, and what is appropriate will depend on the context. However, reports about the AI Office’s engagement with companies seeking early compliance with the AI Act suggest that the AI Office expects companies to actively promote training and guidance among their staff. No official guidance on the form of training has been issued so far. In some cases, the training material might be, for example, a short PowerPoint presentation containing links to further material (information from the EU Commission, legal texts, blog posts, etc), supported by references to the training and explanations in established communication channels (intranet, e-mail, company wiki, etc). Training videos might also be used.
- Linking up AI literacy training with an AI policy: A separate, general AI policy that covers AI compliance within the company more broadly might form part of the AI literacy programme. Such a policy can outline a framework for the permitted use of AI, and can also be used to link further training material and distribute it within the company. For example, a policy may stipulate that employees must read the training presentation before using AI tools (eg before receiving company login credentials), include clauses addressing and mitigating risks inherent in the AI tools used in the company, and list the AI tools approved for or available to employees.
Preparation for enforcement can create broader benefits
The AI literacy obligation will be enforced by national authorities, most of which have yet to be appointed (appointments may not occur until summer 2025).
Aside from being required for compliance with the specific AI literacy obligation, AI training is also useful for mitigating the risks arising from the use of AI tools, particularly those that may have an external impact or operate in the HR space. Risks relating to intellectual property, labour law and data protection compliance, the protection of trade secrets and personal rights, and reputational damage can in practice be reduced by an appropriately robust training programme that builds sufficient staff awareness, sensitivity and knowledge of the risks and pitfalls of AI adoption.
Outlook
Against the backdrop of the new AI literacy obligation, it is often advisable to offer training and training materials in a timely manner, and to develop AI policies. Training programmes should be adapted to the needs and risk profile of each organisation. More guidance on best practices for AI literacy is expected from the AI Office later this year.
Meanwhile, it is hoped that a forthcoming workshop on AI literacy requirements, organised by the EU’s AI Office on 20 February 2025, will provide some early insights. The workshop is open to all companies that have signed the EU’s AI Pact (a European Commission initiative encouraging voluntary compliance with key principles of the AI Act ahead of the legally binding deadlines) and will include the presentation of a repository of AI literacy practices.