Freshfields TQ

Agreement on EU AI Act reached - what does it mean for businesses?

Introduction 

After more than two years of discussions and an intense final trilogue negotiation between the three EU institutions, a political agreement on the AI Act was reached on 9 December 2023. 

This means that the cornerstones of the AI Act are now agreed, but the legislative process is not over yet. Further technical negotiations on the final wording of the law will take place in the coming months. This blog post provides a short summary of the main points agreed, what to expect next, and what organisations implementing AI solutions should do now to prepare for compliance.

Definition of AI 

The definition of AI systems is key to defining the scope of the AI Act. Aligning with international standards has always been an important goal for EU policymakers and, as such, they based the final definition of AI systems on the recently updated version from the OECD:

‘An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' 

To provide greater legal certainty on how to interpret this broad definition of an AI system, we expect further technical work in the coming months on the final wording of the Recitals to the AI Act. 

Classification of High-risk AI systems and prohibited applications

At its core, the AI Act proposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rules. AI systems with an ‘unacceptable level of risk’ will be strictly prohibited, and those considered ‘high-risk’ will be subject to the most stringent obligations. ‘Limited-risk’ AI systems, such as chatbots on websites, must meet lighter obligations mainly consisting of transparency requirements. The AI Act allows the free use of minimal- or no-risk AI systems; this category includes applications such as AI-enabled video games and spam filters.

  • Prohibited AI uses

The AI Act bans certain uses of AI that pose an unacceptable risk to citizens’ rights and democracy. The deal struck by EU policymakers prohibits biometric categorisation systems that use sensitive characteristics (e.g. political beliefs, race), untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, AI systems that manipulate human behaviour to circumvent people’s free will, AI used to exploit the vulnerabilities of people (due to their age, disability, or social or economic situation), and some cases of predictive policing for individuals.

Real-time remote biometric identification in public spaces was one of the most debated items on the list. The final compromise agreement includes certain exceptions under which such AI systems will be allowed for law enforcement purposes, e.g. to prevent terrorist attacks.

  • High-risk AI systems

The final list of high-risk AI systems closely follows the list proposed by the co-legislators during the legislative process. However, EU policymakers have allowed some AI systems to be exempted from the high-risk regime via a self-assessment. This applies to AI systems that do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. At least one of four conditions must be met for an exemption to apply: the AI system is intended to (1) perform a narrow procedural task; (2) improve the result of a previously completed human activity; (3) detect decision-making patterns or deviations from prior decision-making patterns, without being meant to replace or influence the previously completed human assessment without proper human review; or (4) perform a task preparatory to an assessment relevant for the use cases listed in the Annex that determines the high-risk use cases. However, AI systems listed as high-risk that perform profiling of natural persons will always be considered high-risk and cannot be exempted.
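
For illustration only, the self-assessment can be thought of as a simple boolean check. The Python sketch below paraphrases the four conditions and the profiling carve-out; the condition names are ours, not the legal text, and the final wording may differ.

```python
# Illustrative sketch only: paraphrases the agreed self-assessment
# conditions; names and structure are ours, not the legal text.
def exempt_from_high_risk(
    performs_profiling: bool,
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_human_assessment: bool,
    preparatory_task_only: bool,
) -> bool:
    """Return True if a listed AI system could self-assess as exempt
    from the high-risk regime under the agreed conditions."""
    if performs_profiling:
        # Systems profiling natural persons are always high-risk.
        return False
    # Any one of the four conditions suffices for the exemption.
    return any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_without_replacing_human_assessment,
        preparatory_task_only,
    ])

# Example: a system that only performs a narrow procedural task and does
# not profile anyone could self-assess as exempt.
print(exempt_from_high_risk(False, True, False, False, False))  # True
```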

Among other requirements, Members of the European Parliament successfully pushed for the inclusion of a mandatory fundamental rights impact assessment for high-risk systems. AI systems considered high-risk will also need to comply with transparency, data governance and human oversight requirements.

Regulating General Purpose AI / Generative AI / Foundation models

Originally, the AI Act did not cover AI systems without a specific purpose. However, EU legislators felt that this left a regulatory gap given the emergence of Generative AI systems, which can produce new content such as text, images or sounds based on existing data and can be used in a variety of ways. To close this perceived gap, EU lawmakers debated different regulatory approaches in recent months. After seven hours of discussion, lawmakers reached an agreement introducing guardrails for Generative AI/foundation models under the umbrella of a new category of ‘General Purpose AI (GPAI) systems and models’. A tiered approach has been agreed, distinguishing between all GPAI models and models with potentially systemic risks – i.e. the most powerful models. Basic documentation and information-sharing obligations will apply to all models, including complying with EU copyright law and disseminating summaries about the content used in training. These obligations can be supplemented – but not replaced, as suggested by some policymakers – by codes of conduct. Additional obligations for systemic models include assessing systemic risks, conducting adversarial testing, reporting information about serious incidents to the European Commission and national authorities, ensuring cybersecurity protection, and complying with environmental standards.

Under the AI Act, a GPAI model is presumed to be of systemic risk if it has reached a certain threshold of computational resources used in training. The European Commission has been empowered to adopt delegated acts to amend the thresholds and to include other criteria reflecting technological developments. No separate rules on fines for foundation models are foreseen; these will be subject to the same fine regime as high-risk AI systems (see below).
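
Purely as an illustration, the presumption can be expressed as a simple threshold check. The sketch below assumes the widely reported threshold of 10^25 floating-point operations (FLOPs) used in training; this figure is not final legal text and the Commission could amend it by delegated act.

```python
# Illustrative sketch only. The 1e25 FLOP threshold is the figure widely
# reported from the political agreement (an assumption here, not final
# legal text); the Commission may amend it via delegated acts.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float,
                           threshold: float = SYSTEMIC_RISK_FLOP_THRESHOLD) -> bool:
    """Return True if a GPAI model is presumed to pose systemic risk
    based on the compute used for its training."""
    return training_flops >= threshold

# Example: a model trained with ~2e25 FLOPs falls under the presumption.
print(presumed_systemic_risk(2e25))  # True
```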

Governance and fines 

Another area of friction among EU policymakers was how to address governance and enforcement of the new rules. The political agreement foresees the establishment of an AI Office within the European Commission to oversee the most advanced GPAI models and enforce the common rules. A scientific panel of independent experts will advise the AI Office in this regard, and an AI Board composed of Member State representatives will act as a coordination platform, supported by an advisory forum of stakeholders that will provide Member States with technical expertise.

Fines will vary depending on the type of infringement and are set at either a predetermined sum or a percentage of the company’s global annual turnover in the previous financial year, whichever is higher. For infringing the rules on prohibited practices, companies may be subject to fines of up to EUR 35 million or 7% of their global annual turnover. For infringement of the general obligations set out by the AI Act, the fines may be up to EUR 15 million or 3%. If companies supply incorrect information, fines may be up to EUR 7.5 million or 1.5%. In addition, the political agreement envisages more proportionate caps on administrative fines for SMEs and start-ups.
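
To make the ‘whichever is higher’ mechanics concrete, here is a minimal Python sketch assuming the caps above; the figures reflect the political agreement and may change in the final text, and actual fines will depend on the circumstances of each case.

```python
# Illustrative sketch of the "whichever is higher" fine-cap mechanics
# described above; caps reflect the political agreement and may change
# in the final text. Lower caps are envisaged for SMEs and start-ups.
FINE_CAPS_EUR = {
    "prohibited_practices": (35_000_000, 0.07),   # EUR 35m or 7%
    "general_obligations": (15_000_000, 0.03),    # EUR 15m or 3%
    "incorrect_information": (7_500_000, 0.015),  # EUR 7.5m or 1.5%
}

def fine_cap(infringement: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine cap for a given infringement type."""
    fixed_sum, pct = FINE_CAPS_EUR[infringement]
    return max(fixed_sum, pct * global_annual_turnover_eur)

# Example: with EUR 2bn global turnover, a prohibited-practices breach is
# capped at max(EUR 35m, 7% x EUR 2bn) = EUR 140m.
print(fine_cap("prohibited_practices", 2_000_000_000))  # 140000000.0
```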

Legislative outlook

Following the political agreement reached by EU policymakers on 9 December 2023, technical negotiations will take place in the coming months to refine the provisions agreed politically and to settle certain aspects of the legal text that remain to be agreed.

In fact, EU policymakers have already started meeting in the week commencing 11 December to iron out the details of the legal text. We expect technical negotiations to last until 9 February 2024. Once a final AI Act text is agreed, it will need to be formally approved by the Parliament and the Council before the end of this legislative mandate (April 2024).

Since the last EU Parliament plenary session takes place at the end of April 2024, a joint IMCO-LIBE committee vote would need to happen before March 2024. In terms of implementation, we expect the AI Act to start applying in Q2/Q3 2026.

Action items for businesses 

Although there is no final wording for the AI Act as yet, the direction of travel is very clear and organisations that are developing, using, or implementing AI solutions are well advised to start their regulatory preparation projects now. Actions to consider include:

  • Scoping AI use cases and your role: Review whether your planned or implemented AI use cases fall under one of the regulated categories of the AI Act, specifically the high-risk or GPAI categories. If so, determine your role, as the most onerous compliance obligations will lie with providers of AI systems, whereas deployers/users and importers will have lighter compliance obligations. Your role within the AI value chain may not always be clear cut, and the final wording of the AI Act is expected to bring more clarity here. In the meantime, identifying and documenting your role now will help with future accountability requests from customers, suppliers or authorities (see the sketch after this list for one way to record this).
     
  • Transparency obligations: The AI Act provides for transparency obligations at different levels, ranging from labelling limited-risk AI systems such as website chatbots (so that visitors are notified they are interacting with AI) to disseminating summaries of the content used to train GPAI systems and models. Moreover, transparency principles on the use of AI with consumers are already in force in most EU member states based on consumer protection and unfair competition law principles. A general review of whether you have put transparency information in place for customer-facing AI systems is therefore worthwhile.
     
  • AI Governance: Start implementing an AI governance structure within your organisation to support you in operationalising the upcoming regulatory requirements. Bring together all relevant stakeholders from your legal, business, technical and risk functions. A multi-disciplinary approach may help you embed the regulatory requirements into your production cycle, and into your internal and external services and offerings, from the start.
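
As mentioned in the scoping item above, one hypothetical way to document AI use cases and roles is a simple structured inventory. The sketch below is illustrative only: the categories mirror the AI Act’s terminology, but the record structure and field names are our own assumptions, not anything the Act prescribes.

```python
# Hypothetical inventory entry for documenting AI use cases and roles.
# Categories mirror the AI Act's terminology; the field names and
# structure are illustrative assumptions, not prescribed by the Act.
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal or no risk"
    GPAI = "general-purpose AI"

class ValueChainRole(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AIUseCaseRecord:
    name: str
    description: str
    risk_category: RiskCategory
    role: ValueChainRole
    transparency_notice_in_place: bool = False
    notes: list = field(default_factory=list)

# Example: a customer-facing chatbot recorded as a limited-risk system
# that the organisation deploys (rather than provides).
chatbot = AIUseCaseRecord(
    name="Website support chatbot",
    description="LLM-based assistant answering customer FAQs",
    risk_category=RiskCategory.LIMITED_RISK,
    role=ValueChainRole.DEPLOYER,
    transparency_notice_in_place=True,
)
print(chatbot.risk_category.value)  # limited-risk
```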

Tags

ai, eu digital strategy, eu ai act