AI is increasingly used in the workplace and is becoming more valuable across almost all areas of employment. On the one hand, companies themselves use AI systems for a wide variety of employment purposes, for example when hiring new employees, assigning work tasks, or monitoring employees. On the other hand, employees use AI to perform their daily tasks.
In this blog post, we take a closer look at (i) the role of employee representatives in the AI Act, (ii) the legal situation in Germany as an example, and (iii) the first decision in Germany on participation rights regarding the introduction of AI.
1. Role of Employee Representatives in the AI Act
The AI Act follows a risk-based approach and introduces different risk categories for AI systems, as explained in an earlier post.
Since AI systems posing unacceptable risks will be banned under the AI Act, there is no regulation regarding the role or participation of employee representatives in respect of such systems.
Many AI systems in the employment context will be considered high-risk AI systems, in particular: (i) AI systems intended to be used for the recruitment or selection of future employees, in particular to place targeted job advertisements, analyse and filter job applications and evaluate candidates; and (ii) AI systems intended to be used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships.
Employers using such high-risk AI systems shall inform both the employee representatives and the employees affected that they will be subject to the use of the high-risk AI system before putting it into service or using it at the workplace. This information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and national law and practice on the information of employees and their representatives (Art. 26 Sec. 4); it is not entirely clear whether the AI Act creates a stand-alone information obligation.
There are no such obligations regarding AI systems with special transparency obligations or AI systems with minimal risk.
In addition, the AI Act clarifies that its obligations are without prejudice to obligations of employers to inform or consult workers or their representatives in accordance with Union or national law and practice (see recital No. 92 of the AI Act).
2. Example: Legal situation in Germany
The German Works Constitution Act (Betriebsverfassungsgesetz - BetrVG) explicitly refers to the term ‘artificial intelligence’ in three provisions, without defining it:
- the consultation of an (external) expert by the works council is deemed necessary if the works council has to assess the introduction or use of AI;
- the employer must inform the works council sufficiently in advance about the planning of work processes and procedures ‘including the use of’ AI; and
- guidelines on selection for recruitment require the consent of the works council if AI is used in their creation.
These new sections have been added to ensure that works councils are able to understand, evaluate and help shape complex information technology contexts. When it comes to AI, there is an ‘undeniable need for support from works councils’ according to the legislator.
Depending on the individual case, the planned use of AI in the company may trigger further genuine co-determination rights under German law. Genuine co-determination rights include the right of the works council to call upon a conciliation committee to reach a binding agreement. In most cases, the works council has a right of initiative to approach the employer and demand the implementation of measures, if necessary indirectly through a conciliation committee. The co-determination rights potentially relevant to the use of AI include, in particular, (i) the organization of the company or the conduct of the employees in the company, (ii) the introduction and use of technical equipment intended to monitor the behaviour or performance of employees, and (iii) the prevention of accidents at work and occupational illnesses as well as health protection within the framework of statutory regulations or accident prevention regulations.
The most relevant co-determination right with regard to the use of AI is arguably the one concerning the introduction and use of technical equipment intended to monitor the behaviour or performance of employees (Sec. 87 Para. 1 No. 6 BetrVG). According to the German Federal Labour Court, this right also covers devices that are objectively suitable for monitoring employees, i.e. not only those that explicitly have monitoring as their purpose. The intention of the employer, in particular the question of whether it actually wants to evaluate the data available to it, is therefore irrelevant.
3. First Decision: Hamburg Labour Court
Earlier this year, the Hamburg Labour Court had to decide a case in which the employer directed its employees to use ChatGPT as a work tool (reference: 24 BVGa 1/24). ChatGPT was not installed on the employer's computer systems but was used by the employees via a web browser, which required the employees to set up their own private accounts with ChatGPT. The works council applied for interim legal protection seeking an order requiring the employer to prohibit the use of AI systems, arguing that the instruction to use ChatGPT violated the works council's co-determination rights.
The Hamburg Labour Court ruled that the works council had no right of co-determination. The relevant provisions, in particular the co-determination right regarding the introduction and use of technical equipment (see above, under 2.), did not apply because the AI systems were not installed on the employer's computer systems but were accessed via a browser, with the employees logging in through private accounts. The employer therefore did not receive any notification or data on when which employee used the application and for what purpose. Even if the provider of ChatGPT, for example, records this data, this does not trigger co-determination, as this 'monitoring pressure' is not exerted by the employer.
However, it seems that the decision of the Hamburg Labour Court can only be applied to cases in which the employer neither receives any information about which employees use the AI systems, how and to what extent, nor could obtain such information from the third party by agreement. If the employer has access to this usage data, it can be anticipated – at least under the broad understanding of the German Federal Labour Court – that the works council would have a right of co-determination, assuming the AI system is objectively suitable for monitoring employees.
In our next blog, we will take a closer look at how the protection of data privacy is considered under the AI Act.