Freshfields TQ


EU AI Act moves towards the final stages – Major take-aways from Parliament’s vote

More than two years after the adoption of the European Commission’s proposal for an AI Act (see our briefing here), the European Parliament (Parliament) voted in favour of its position today.

Discussions in the Parliament were particularly protracted: there were initial disagreements over which MEPs should lead the work, and the release of a popular generative AI chatbot at the end of 2022 ignited a debate on whether the AI Act should include specific rules for general purpose and generative AI. Those discussions came to a conclusion with today’s vote.

At its core, the AI Act proposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rules. AI systems with an ‘unacceptable level of risk’ would be strictly prohibited, and those considered ‘high-risk’ would be subject to the most stringent obligations.

Definition of AI 

The centrepiece of the AI Act is the definition of AI. It is therefore not surprising that this was also a focus of the discussions around the Parliament’s position. Like the Council of the European Union (Council), which adopted its position in December, the Parliament seeks greater alignment with international definitions, particularly the work undertaken by the OECD and NIST, with a focus on the concepts of “autonomy” and “machine-based systems”.

As set out in today’s press release, the Parliament’s aim is to ensure a uniform definition of AI that is “designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow.” That said, the Parliament’s position also argues that “comparably simpler techniques such as knowledge-based approaches, Bayesian estimation or decision-trees may also lead to legal gaps that need to be addressed by this Regulation, in particular when they are used in combination with machine learning approaches in hybrid systems”. This suggests that more traditional models could also be caught within the scope of the AI Act.

Prohibited practices

As mentioned above, the proposal considers that certain AI systems entail ‘an unacceptable risk’. These practices would be prohibited per se, e.g. AI systems using subliminal, purposefully manipulative or deceptive techniques.

The Parliament’s position proposes expanding the list of prohibited practices to include bans on ‘intrusive and discriminatory’ uses of AI systems including:

  • ‘real-time’ remote biometric identification systems in publicly accessible spaces;
  • biometric categorisation based on sensitive or protected attributes or characteristics of a person, such as gender, race or religion; and
  • AI systems used to infer people’s emotions in the areas of law enforcement, border management, the workplace and education institutions.

In addition, certain prohibitions would apply only where the AI system causes, or is likely to cause, significant harm.

High-risk systems and an extra layer of qualification

AI systems with a ‘high risk of causing harm’ would be subject to the most stringent obligations under the AI Act. The Commission and the Council suggested categorising high-risk systems based on certain critical areas or use cases listed in an Annex. The Parliament now suggests an additional test: a system falling under one of the critical areas or use cases listed in the Annex must also pose a significant risk of harm to the health, safety or fundamental rights of a natural person in order to qualify as ‘high-risk’. Companies would be required to conduct this assessment on the basis of guidance to be prepared by the Commission and to submit the self-assessment to a national authority or the AI Office, a new central body that would be tasked with providing guidance and supporting enforcement of the Act. This would offer more flexibility but also create legal uncertainty for companies assessing whether the AI Act covers their AI products.

The Parliament has further proposed adding a couple of new systems to the high-risk category. For example, AI systems intended to influence the outcome of an election or voting behaviour are now also on the list. Notably for Very Large Online Platforms regulated under the EU Digital Services Act, the list would also include the recommender systems used by their social media platforms.

General Purpose AI, Generative AI and Foundation Models

Originally, the AI Act did not cover AI systems without a specific purpose. This changed with the emergence of Generative AI systems, which can produce new content such as text, images or sounds based on existing data and can be used in a variety of ways. To cover this gap, the Council subsequently proposed introducing a category of ‘General Purpose AI’, meaning AI systems that have a wide range of possible uses not foreseen in advance and which may be integrated in a plurality of other AI systems. The Council envisaged applying a lighter set of obligations to General Purpose AI used in high-risk systems than to high-risk systems themselves, with the specific requirements to be set out in implementing acts by the Commission.

The Parliament now suggests that General Purpose AI providers would not need to fulfil the high-risk requirements. Instead, it proposes requiring these providers to undergo certain testing and to maintain specific system documentation.

In addition, the Parliament suggests explicitly defining Generative AI systems and classifying them as a sub-category of Foundation Models – the latter being another new category suggested by the Parliament, defined as “AI system models that are trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks”. Foundation Models would fall under a stricter regime and would have to comply with a set of obligations similar to those for high-risk systems. Generative AI systems would, in addition, have to ensure compliance with an extra set of obligations, such as transparency rules, the obligation to train models in a way that safeguards against the generation of content in breach of EU law, copyright rules, and potential misuse. Fines for Foundation Model providers breaching the AI rules could amount to up to €10 million or 2% of annual turnover, whichever is higher.

Holistic approach 

Whilst links with other regulations and alignment with other areas of law were already present in the initial Commission proposal, the Parliament’s position seems to strengthen this holistic approach, for example by referring to the role of employee representatives when putting high-risk AI systems into service in the workplace and by including references to the GDPR in specific provisions of the AI Act.

Outlook 

Following the overwhelming support received at committee level, the full Parliament in plenary still needs to vote in favour of entering into interinstitutional negotiations (“trilogues”) on this basis under the Spanish Presidency of the Council. This vote is expected to take place during the week of 12 June.

Although the AI Act is the main digital priority for Spain under its Council Presidency, landing a compromise on this major file will not be an easy task. While several provisions of the co-legislators’ positions seem close, there are key aspects of the file on which the European Parliament’s position differs from the Member States’ position, such as biometrics, the list of prohibited practices, how to prove that high-risk AI systems pose a significant risk, and how to deal with foundation models.

A final agreement on the AI Act is expected by the end of 2023, but if the Spanish Presidency of the Council does not manage to broker one, the incoming Belgian Presidency is expected to aim to close the AI Act at the beginning of 2024.

Tags

ai, eu digital strategy, eu ai act, eu digital services act