The EU AI Act introduces a new regulatory framework that distinguishes between several roles, including deployers and providers. Employers will typically assume the role of deployers when using AI systems to manage their workforce. However, they may also act as providers if they create or substantially modify the AI systems they use in the workplace. This dual role presents distinct challenges and obligations under the EU AI Act.
In this blog post, we will explore the key differences between acting as deployers or providers and outline the specific obligations that each role entails for employers.
(Check out all the previous articles in our EU AI Act unpacked blog series).
Employers as deployers
Employers will typically act as deployers when they simply purchase and use existing AI systems that have been developed and pre-trained by third parties.
However, it remains unclear whether employers will qualify as deployers if their employees are permitted to use freely accessible AI systems via a browser (eg AI-based translation tools). This ambiguity stems from the AI Act's definition of a deployer as an entity that uses an AI system under its own authority for professional purposes. If employers merely allow or tolerate the use of such systems, they may arguably lack the necessary authority to be classified as deployers. Conversely, if these systems are integral to employees’ work performance or company operations, this may suggest that their use occurs under the employer’s authority, leading to a classification as deployers.
In a previous blog, we noted that many (though not all) workplace uses of AI may fall under the high-risk AI system category. Employers should therefore be mindful of the obligations associated with deploying high-risk AI systems, which include:
- Compliance and monitoring: use and continuously monitor the high-risk AI system in accordance with the provider’s instructions for use; immediately suspend use and fulfil specific reporting obligations if there are reasons to believe that the system may present a risk to individuals’ health, safety, or fundamental rights or, similarly, in cases of a ‘serious incident’ (Articles 26(1) and (5));
- Human oversight: assign individuals with the necessary competence, training, authority and support to oversee high-risk AI systems (Article 26(2));
- Input data control: where the employer exercises control over the input data, ensure it is relevant and sufficiently representative for the system’s intended purpose (Article 26(4));
- Data logging: keep logs automatically generated by the high-risk AI system for a minimum of six months (Article 26(6));
- Employee information: where required, inform affected employees and relevant employee representatives before putting into service or using a high-risk AI system (Article 26(7)). For further details on this aspect, please check our previous blog;
- Transparency: fulfil additional information obligations in the case of high-risk AI systems that make or assist in decisions about natural persons, including informing employees that they are subject to the use of such AI systems (Article 26(11)). Employees affected by decisions based on such high-risk AI systems (e.g. rejected applicants) may also have the right to receive clear and meaningful explanations of the AI system’s role in the decision-making procedure and the main elements of the decision (Article 86);
- Data protection: incorporate the information provided by the provider into a data protection impact assessment, where applicable (Article 26(9)).
Understanding the shift from deployer to provider
While employers will typically serve as deployers, Article 25 of the EU AI Act establishes that a shift from deployer to provider may occur when employers:
- Put their name or trademark on high-risk AI systems that have already been placed on the market;
- Make a substantial modification to an existing high-risk AI system in such a way that it remains classified as a high-risk AI system, or
- Modify the intended purpose of a non-high-risk AI system in such a way that it becomes a high-risk AI system.
This can include cases where employers, unsatisfied with existing market solutions, choose to ‘customise’ high-risk AI systems, either independently or in collaboration with developers. Determining whether these modifications trigger a reclassification from deployer to provider will often require case-by-case analysis.
For instance, consider an employer intending to create an ‘AI Assistant’ that answers employee questions about internal policies, built by feeding a general-purpose AI (GPAI) system developed by a third party (eg ChatGPT or Gemini) with specific instructions and the relevant policies. If the employer puts the AI Assistant into service under its own name or trademark, it could be classified as a provider. Conversely, if the employer uses the third-party GPAI system without rebranding it and simply adds its own data to customise the answers, without altering the pre-training data or the architecture of the third-party system, the employer would likely remain a deployer. In any case, any modification or fine-tuning of existing GPAI systems should be carefully assessed before implementation.
Another scenario could involve an employer using a chatbot, originally designed by a third party for non-high-risk applications, for a high-risk use case such as recruitment. By changing the intended purpose in this way, the employer could itself become a provider of a high-risk AI system.
Employers as providers
Employers classified as providers must comply with the additional obligations outlined in Articles 9-22 of the AI Act, which are much stricter than those for deployers.
These obligations include ensuring that any high-risk AI system meets the general requirements for trustworthy AI in terms of data governance, technical documentation and record-keeping, transparency, human oversight, accuracy, cybersecurity and robustness. Employers must also conduct a conformity assessment, implement a quality management system and adhere to strict registration, documentation, and information-sharing obligations. For further details on this aspect, please check our previous blog.
AI literacy
Regardless of their classification as providers or deployers, employers are required under Article 4 of the EU AI Act to ensure that their employees and other individuals dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This obligation has applied since 2 February 2025.
The AI Act does not specify how companies should achieve AI literacy, giving employers flexibility to design their own approaches. AI literacy initiatives should cover basic AI concepts and skills, such as an understanding of how AI systems function, the types of AI products available, and their uses, risks, and benefits.
Training should be tailored to the employees’ knowledge levels and to the extent and context of AI use within the company. For internal use of tools such as ChatGPT or Gemini, a brief overview of AI Act obligations and responsible AI usage could be sufficient. However, when using AI tools in sensitive areas like HR, more comprehensive training may be required to address specific risks (e.g. discrimination; impact on diversity) and obligations (e.g. human oversight) associated with such use.
For more details on AI literacy, see our previous blog post. In addition, further guidance from the AI Office is expected later in the year.
Conclusion
The EU AI Act establishes distinct roles and responsibilities for employers as AI deployers and providers, necessitating a thorough understanding of compliance obligations for each role. As the regulatory landscape evolves, employers must proactively assess their use of AI systems, implement necessary oversight and training measures, and ensure their operations align with the Act's requirements.