
EU AI Act unpacked #16: Risks associated with AI systems used in the workplace

[You can find all episodes of our EU AI Act unpacked blog series by clicking here.]

The use of AI has proliferated in recent years. AI sits at the heart of the global trend towards digitalisation, and its various applications have huge potential to improve the ways in which businesses run and in which we, as consumers, interact with them and with one another. In an employment context, AI is quickly becoming a valuable tool for recruitment, work allocation, employee monitoring and many other aspects of the working relationship.

In this blog post, we take a closer look at (i) the phases of an employment relationship and how AI may be used in them, (ii) which categories of AI systems and GPAI models regulated under the AI Act are relevant in an employment context, and (iii) the risks associated with the use of AI in the workplace.

     1. How is AI used in the workplace?

AI can and will be used in all phases of an employment relationship: 

  • In the application phase, recruiters can, for example, enter the requirements of a new position into an AI-supported search engine to find suitable candidates. The search engine searches job portals based on the prescribed requirements, filters suitable candidates and determines the likelihood of a job change. Where there are multiple applications, a pre-selection can be made using so-called people analytics applications: AI-supported analysis tools can automatically scan and sort a large number of applications against the requirements of a position (see the sketch after this list). Technically, some applicants can even be rejected automatically, although the legal permissibility of such automatic rejections must be assessed on a case-by-case basis. To evaluate individual applications, programmes can be used to draw conclusions about applicants’ personalities and characteristics based on text, video and language analysis. Further, AI can assist in drafting, negotiating and signing employment contracts. 
     
  • During an employment relationship, AI can be used in particular for issuing instructions, i.e. for exercising the employer’s right of direction. It can also be used in the context of performance reviews and target assessments, as well as for risk prevention purposes (compliance).
     
  • When terminating an employment relationship, preparatory measures, implementation measures and related issues come into consideration: e.g. preparing termination letters or termination agreements, preparing termination decisions, or generating automated scoring tables for social selection.
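
To make the screening step mentioned above more concrete, here is a minimal, purely illustrative sketch of requirement-based application filtering. Everything in it (the `Position` and `Application` types, the `screen_applications` function, the scoring logic) is a hypothetical simplification; real people analytics tools rely on far more complex text, video and language analysis:

```python
# Purely illustrative sketch of requirement-based application screening.
# All names (Position, Application, screen_applications) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Position:
    required_skills: set[str]                              # hard requirements
    preferred_skills: set[str] = field(default_factory=set)


@dataclass
class Application:
    candidate_id: str
    skills: set[str]


def screen_applications(position: Position,
                        applications: list[Application]) -> list[tuple[Application, float]]:
    """Rank applications by overlap with the position's requirements."""
    ranked = []
    for app in applications:
        if not position.required_skills <= app.skills:
            # Missing a hard requirement. Whether an automatic rejection is
            # legally permissible must be assessed on a case-by-case basis.
            continue
        score = (len(app.skills & position.preferred_skills)
                 / max(len(position.preferred_skills), 1))
        ranked.append((app, score))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```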

     2. Risk categories under the AI Act

As explained in earlier posts (please see here), the AI Act follows a risk-based approach and introduces different risk categories for AI systems and GPAI models. Accordingly, AI systems will be categorised into one of the following risk categories:

  • AI systems with unacceptable risk
  • AI systems with high risk
  • AI systems with special transparency obligations
  • AI systems with minimal risk.

In principle, the higher the risk, the stricter the obligations imposed on the providers and deployers of such systems. 

Prohibited: AI systems with unacceptable risk

Article 5 of the AI Act provides that “AI systems with unacceptable risk” (systems considered to violate fundamental rights enshrined in the Charter of Fundamental Rights of the European Union) will be banned outright. 

In an employment context, this category includes, in particular, systems designed to identify the emotions of employees in the workplace (unless they serve medical or safety purposes, such as monitoring a pilot’s fatigue), as well as those that categorise individual natural persons on the basis of biometric data in order to deduce their ethnic origin, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation.

AI systems with high risk

The “high-risk” category will include AI systems that could potentially have a detrimental impact on the health, safety and fundamental rights of individuals. 

In an employment context, this will include the following AI systems:

  • AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, analyse and filter job applications and evaluate candidates; and
     
  • AI systems intended to be used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships. 

This covers many, but by no means all, fields of use for AI in the workplace. For example, the high-risk category does not include AI systems for approving holiday requests, language assistance and translation programmes or AI-based training measures. However, the use of AI in the area of disciplinary authority is likely to be covered in many cases.

Article 6(3) of the AI Act provides an exception for systems that would otherwise fall into the high-risk category but do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including because they do not materially influence the outcome of decision-making and only perform subordinate support activities. 

The use of high-risk AI systems will be permitted in principle. However, it will be subject to additional obligations, depending on whether employers are categorised as “providers” or “deployers”. We will provide further insights in this respect in another blog post soon. 

     3. Risks associated with AI systems in an employment context

The use of AI systems in the employment context, and particularly in the formation of employment contracts, is associated with a number of legal challenges, which include, inter alia: 

  • Discrimination: Certain uses of AI may result in discrimination against certain groups of workers based on their protected characteristics. AI systems used for recruitment, performance management or other decision-making processes may be based on data that is biased, incomplete or outdated and may therefore produce results that are unfair, inaccurate or inconsistent. This may require, among other things, reviewing and, if necessary, improving the algorithms or data sets (see the first sketch after this list). Breaches may give rise to claims for damages and compensation, reputational risks, and the invalidation of decisions or actions based on AI-enabled systems.
     
  • Impact on diversity: The use of AI in HR tools, such as those for recruitment, performance evaluation, promotion or dismissal, could have a negative impact on diversity if the AI system is based on biased data and makes decisions about the workforce on that basis.
     
  • Collective obligations and employee activism/unrest: Employers should consider any applicable collective obligations when implementing AI in the workplace. We will provide more information on this aspect in another blog post soon. There is also potential for employee activism and/or unrest if the implementation of AI in the workplace is mishandled, or simply in response to general fears around AI.
     
  • Data privacy and automated decision-making: Employers should exercise caution before sharing data with AI applications, in order to protect employee data and ensure compliance with data privacy laws and other AI-specific laws.

    In particular, where AI systems are used to make decisions either on the recruitment of a candidate, by analysing their CV or their facial expressions or voice during an audiovisual interview, or on the performance of an employee or worker, by analysing their behaviour, the GDPR provides for strict limitations. The candidate or employee has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. Exceptions apply where automated decision-making is necessary for entering into, or performing, a contract between the data subject and a data controller, where national law so permits, or where the data subject has given his or her explicit consent. In practice, this means that (outside the scope of these exceptions) employers using AI systems to support decision-making processes should provide for (the possibility of) meaningful human intervention in such processes (see the second sketch below).
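
To illustrate what “reviewing and improving the algorithms or data sets” could involve in practice, here is a minimal, purely illustrative sketch of an outcome-based bias check. It applies the “four-fifths rule” sometimes used in disparate-impact analysis; the group labels, the 0.8 threshold and the function names are all hypothetical assumptions, and the applicable legal standards differ by jurisdiction:

```python
# Illustrative bias check on an AI screening tool's outcomes, comparing
# selection rates across groups. Data and threshold are hypothetical.
from collections import defaultdict


def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, was_selected) pairs -> selection rate per group."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}


def disparate_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose selection rate falls below threshold x the top rate."""
    top = max(rates.values())
    return [g for g, rate in rates.items() if top > 0 and rate / top < threshold]


# Example: group B is selected at half the rate of group A and gets flagged.
rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(disparate_impact_flags(rates))  # ['B']
```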
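
And here is a second, equally hypothetical sketch of what building “(the possibility of) meaningful human intervention” into an AI-supported decision process might look like: adverse outcomes are never issued solely on the basis of the automated score but are routed to a human reviewer. All names and the score threshold are assumptions for illustration only:

```python
# Minimal sketch of human-in-the-loop routing for an AI-supported decision.
# ScreeningResult, route_decision and the 0.8 threshold are hypothetical.
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    candidate_id: str
    model_score: float  # e.g. a suitability score from an AI screening tool


def route_decision(result: ScreeningResult, accept_above: float = 0.8) -> tuple[str, str]:
    """Return a provisional outcome; nothing takes legal effect automatically."""
    if result.model_score >= accept_above:
        return ("shortlist", "human confirms before any legal effect")
    # A rejection would significantly affect the candidate, so it is never
    # issued solely on the basis of the automated score.
    return ("human_review", "reviewer sees the score plus underlying evidence")
```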

In our next blog, we will take a closer look at the protection of personal data under the AI Act.

Tags

ai, employment, eu ai act, eu ai act series, gdpr