A new year also means new regulation. In 2026, several anticipated changes in the field of AI are already set to keep HR and labour law departments busy. In this new series of blog posts we will take a closer look at these actual and anticipated developments, starting with the following in this first piece:
- The AI Act – the EU regulation governing the use of AI – enters a new implementation phase this year, which is particularly significant for employers, although the exact timing is subject to further political debate.
- The EU Commission's ‘Omnibus’ procedure includes proposals to adapt the deadlines for implementing the AI Act, which are currently under discussion in the Council of the EU and the EU Parliament. This could shift the timeline for implementation.
For companies, it is crucial to adapt to this increasingly complex regulatory environment and to build not only technical AI expertise but also regulatory knowledge (see also our previous blog post here). This briefing provides an overview of the AI developments that are relevant from a labour law perspective.
The use of high-risk AI in the HR sector
The AI Act introduces phased implementation obligations. From August 2026, the requirements relating to high-risk AI systems are expected to become binding and will then need to be fully implemented (see below for more details).
What does this mean for employers?
The AI Act, which entered into force on 1 August 2024, adopts a risk-based regulatory approach, classifying AI systems according to their application area and risk potential. It divides AI systems into four risk categories.
- Minimal risk: Non-critical AI systems, such as basic automations (e.g. spam filters), are not subject to specific obligations under the AI Act.
- Limited risk: AI systems with limited impact on employees (e.g. self-service portals or AI-based chatbots) are subject to special transparency requirements. For example, affected employees must be informed that they are interacting with such a system.
- High risk: This category covers systems posing increased risks to individuals or society. Annex III of the AI Act lists the systems considered ‘high-risk’ in this regard, with employment and HR applications expressly named among the key categories.
- Unacceptable risk: AI systems that threaten people or violate fundamental EU values (e.g. systems used for ‘social scoring’ or for emotion recognition in the workplace) are generally prohibited.
In the HR sector, many AI systems fall into the ‘high-risk’ category. Examples include automated applicant selection, performance evaluation, and workplace monitoring. These systems are permitted only under strict conditions, which must generally be in place by August 2026 at the latest (subject to the ‘Omnibus’ proposals discussed below). Key conditions for high-risk AI in HR include the following.
- Organisational measures: Employers must implement appropriate technical and organisational safeguards to ensure the safe and lawful operation of the AI system in accordance with the provider's specifications and instructions.
- Human oversight: Automated decisions may not be made without adequate human review and influence. The individuals entrusted with this oversight must be properly qualified and trained, and ongoing training is mandatory to ensure compliance.
- Control and documentation: Employers must ensure that input data is relevant and representative for the AI system's purpose. The system's operation and decisions must be logged and documented, with records retained for at least six months.
- Transparency: Before using high-risk AI, both employee representatives (notably the works council) and affected employees must be informed in a comprehensive and understandable way. Relevant local rules requiring the involvement of employee representatives (e.g. under the German Works Constitution Act) must also be observed.
Employers are responsible for the selection, integration, and ongoing monitoring of high-risk AI systems. Therefore, regular security checks, robust documentation, transparent reporting procedures, and comprehensive employee training are strongly advised.
Employers should also clarify whether they are acting as an ‘operator’ (in the AI Act's terminology, a ‘deployer’) or a ‘provider’ of AI systems, the latter of which entails considerably more extensive obligations.
‘Omnibus’ procedure
On 19 November 2025, the European Commission introduced a ‘digital omnibus package’ aimed at revising and harmonising key EU legislation relating to the digital single market. Its objectives are to close regulatory gaps, eliminate overlaps, and enhance practicability and legal certainty for companies.
Key elements of the omnibus package include the following.
- Simplifying information and documentation obligations for companies.
- Strengthening the link between the AI Act and the GDPR.
- Harmonising the reporting threshold for data breaches under Art. 33 GDPR, so that employers would only need to report cases that are likely to result in a ‘high risk’ to individuals.
- Allowing employers to refuse or charge a reasonable fee for abusive or excessive employee access requests under Art. 15 GDPR.
- Introducing a more central and strengthened role for the AI Office in supervising and enforcing the AI Act, with clear responsibilities and centralised EU procedures.
A key aspect of the omnibus package is the planned adjustment of the implementation deadlines for high-risk AI under the AI Act. Rather than applying from August 2026, the high-risk obligations would be tied to the availability of the relevant technical standards and tools; they would therefore only take effect once Commission guidelines and harmonised standards are available. According to the Commission's plans, this could postpone the application of certain key provisions by up to 16 months, to 2 December 2027.
The omnibus package remains a proposal and is currently under consideration by the Council of the EU and the European Parliament. Businesses should therefore continue preparing for the high-risk obligations to apply from 2 August 2026, while monitoring developments closely.
Involvement of employee representatives
Even if certain implementation deadlines under the AI Act are postponed, we expect engagement with employee representatives to intensify in 2026. AI is seen not only as a tool that facilitates work but also as a potential risk to job security. When new AI systems are introduced, and not just in the HR sector, the legislation of many European countries requires engagement with employee representatives. This engagement process should ideally be completed before costly AI systems are acquired. Failing to consult the relevant employee representatives or, where applicable, to obtain their consent when introducing or using AI may result in investigations or enforcement risks in certain jurisdictions.

Against this background, it is advisable to engage proactively with employee representatives and to establish general principles for AI through framework agreements. This can help facilitate the introduction of new AI systems and reduce scepticism amongst employees and their representatives.
Additionally, introducing an AI policy that sets out rules for the use of AI in the workplace appears to be a sensible step for employers.
