What is the difference between a high-risk AI system, certain AI systems with transparency risks, general-purpose AI models and general-purpose AI models with systemic risk?
In the first post of our EU AI Act unpacked blog series, we looked at how the EU Artificial Intelligence Act (AI Act) defines AI systems and General Purpose AI (GPAI) models. In this post, we take a closer look at how the different categories of AI systems and GPAI models are regulated under the AI Act and give a brief overview of the corresponding compliance requirements for organisations.
Risk categories under the AI Act
As explained in our previous post, the AI Act follows a risk-based approach and introduces different risk categories for AI systems and GPAI models. In particular, the AI Act aims to protect health, safety and fundamental rights, the rule of law and the environment against the harmful effects of AI systems in the Union, while also supporting innovation.
| Type | Categories |
| --- | --- |
| AI systems | Divided into three different categories: prohibited AI practices posing an unacceptable risk, high-risk AI systems and certain AI systems with transparency risks. |
| General-purpose AI (GPAI) models | For GPAI models, the AI Act foresees a tiered approach depending on whether it is a GPAI model with systemic risk or a “normal” GPAI model. |
Prohibited AI practices posing an unacceptable risk
Organisations may not engage in AI practices that pose an unacceptable risk to people’s safety or to the fundamental rights enshrined in the Charter of Fundamental Rights of the EU, which protects dignity, freedoms, equality, solidarity, citizens’ rights and justice.
In a nutshell, the following AI practices are generally prohibited under the AI Act:
- Subliminal manipulation
- Exploitation of vulnerabilities
- Social scoring
- Assessing the likelihood of a person committing a crime based on profiling
- Building facial recognition databases through untargeted scraping
- Emotion recognition in the workplace and education institutions
- Biometric categorisation systems inferring sensitive characteristics
- Real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement
However, some of these AI practices are permitted under specific and narrow exemptions, and others are only vaguely described, so the devil is in the detail. In light of the potentially heavy fines under the AI Act, organisations should therefore carefully analyse whether their AI system could fall under one of these prohibited practices before launching it.
High-risk AI systems
High-risk AI systems are subject to strict compliance obligations. A thorough AI risk and compliance management system is required not only within organisations providing or deploying such systems, but also for other stakeholders along the value chain.
There are two groups of high-risk AI systems:
(i) AI systems are considered high-risk if they are safety components of products covered by sectoral EU product safety law (or are themselves such products) and are subject to a third-party conformity assessment under that sectoral law, eg Regulation (EU) 2017/745 on medical devices or Regulation (EU) No 167/2013 on agricultural and forestry vehicles (Article 6(1) AI Act). A safety component is a component of a product whose failure or malfunctioning endangers the health and safety of persons or property.
(ii) Additionally, AI systems are considered high-risk if they fall under a use case covered in one of the eight areas set out in Annex III of the AI Act (Article 6(2) AI Act):
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment, workers’ management and access to self-employment
- Access to and enjoyment of essential private services and essential public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
As an exemption to the above, a provider of an AI system that would qualify as high-risk under Article 6(2) AI Act can conduct and document a risk assessment to demonstrate that the AI system does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, ie that it should not qualify as high-risk. The AI Act provides a limited list of relevant conditions, including that the AI system does not materially influence the outcome of decision-making (Article 6(3) and (4) AI Act). This risk assessment must be conducted and documented before the AI system is placed on the market or put into service. If one of the exemptions applies, the provider is still required to register the AI system in an EU database maintained by the Commission. However, AI systems that perform profiling of natural persons do not benefit from this exemption even if one of the criteria set out in Article 6(3) AI Act is met.
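To make the interplay of Article 6(1), 6(2) and 6(3) AI Act easier to follow, the classification logic can be sketched roughly as follows. This is a simplified illustration only: the helper flags and the shorthand list of Annex III areas are our own assumptions for readability, not definitions taken from the Act, and any real assessment requires a case-by-case legal analysis.

```python
# Simplified sketch of the high-risk classification logic in Article 6 AI Act.
# All inputs are assumed flags describing the AI system; the area names below
# are shorthand labels for the eight Annex III areas, not the Act's wording.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_asylum_border", "justice_democracy",
}

def is_high_risk(
    is_safety_component_of_regulated_product: bool,    # Article 6(1)
    requires_third_party_conformity_assessment: bool,  # Article 6(1)
    annex_iii_area: str | None,                        # Article 6(2), Annex III
    performs_profiling: bool,                          # Article 6(3), profiling carve-out
    documented_no_significant_risk: bool,              # Article 6(3) assessment
) -> bool:
    # Group (i): safety components of products under sectoral EU product
    # safety law that are subject to third-party conformity assessment.
    if is_safety_component_of_regulated_product and requires_third_party_conformity_assessment:
        return True

    # Group (ii): use cases falling under one of the Annex III areas.
    if annex_iii_area in ANNEX_III_AREAS:
        # Profiling of natural persons never benefits from the exemption.
        if performs_profiling:
            return True
        # Exemption: a documented assessment shows no significant risk of harm
        # (the system must then still be registered in the EU database).
        if documented_no_significant_risk:
            return False
        return True

    return False
```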
Organisations that may be developing a high-risk AI system should conduct and document such a risk assessment early on. If they cannot benefit from this exemption and consequently develop an AI system qualifying as high-risk, they must fulfil a large number of compliance requirements that affect the design of the high-risk AI system, including:
- implementing a risk management system (Article 9 AI Act) and an overall quality management system (Article 17 AI Act);
- ensuring data and data governance requirements (Article 10 AI Act);
- drawing up an extensive technical documentation (Article 11, Annex IV AI Act);
- allowing for automated recording of event logs (Article 12 AI Act; see the illustrative sketch after this list);
- providing usage instructions for deployers (Article 13 AI Act), including tools for appropriate human oversight (Article 14 AI Act); and
- ensuring accuracy, robustness and cybersecurity of the high-risk AI system (Article 15 AI Act).
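To illustrate what the event-logging requirement of Article 12 AI Act can mean in practice, the following minimal sketch records one structured, timestamped entry per use of a system. The field names and the JSON-lines format are our own assumptions for illustration; the AI Act is technology-neutral and does not prescribe a log format.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of automatic event recording in the spirit of Article 12 AI Act.
# Field names and granularity are illustrative assumptions only.
logging.basicConfig(filename="ai_event_log.jsonl", level=logging.INFO, format="%(message)s")
logger = logging.getLogger("high_risk_ai_events")

def log_inference_event(model_version: str, input_reference: str, output_summary: str) -> None:
    """Append one structured, timestamped record per use of the system."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_reference": input_reference,  # reference to the input data, not the data itself
        "output_summary": output_summary,
    }
    logger.info(json.dumps(record))

# Example use:
log_inference_event("v1.2.0", "case-2024-001", "score=0.87")
```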
Deployers of a high-risk AI system must, inter alia,
- ensure the correct use of the AI system, including monitoring (Article 26(1) and (5) AI Act);
- assign human oversight (Article 26(2) AI Act); and
- where they have control over the input data, ensure its quality (Article 26(4) AI Act).
For some high-risk AI systems, deployers also have to conduct a fundamental rights impact assessment (Article 27 AI Act).
Importers must ensure the conformity of the high-risk AI system (Article 23(1) AI Act) while distributors must ensure that the high-risk AI system bears the required CE marking (Article 24(1) AI Act).
Additionally, any distributor, importer, deployer or other third party will be considered a provider of a high-risk AI system, with all corresponding obligations, if they put their name or trademark on a high-risk AI system, make substantial changes to it, or modify its intended purpose (Article 25(1) AI Act).
Certain AI systems with transparency risk
Providers and deployers of AI systems that the AI Act considers to pose transparency risks are subject to additional transparency requirements (exemptions can apply, among other things, where the AI system is used to detect, prevent, investigate or prosecute criminal offences). This category notably covers all generative AI systems:
- AI systems intended to interact directly with natural persons.
- AI systems, including AI systems based on a GPAI model, that generate synthetic audio, image, video or text content.
- AI systems that generate or manipulate image, audio or video content constituting a deep fake.
- Emotion recognition systems or biometric categorisation systems.
- AI systems that generate or manipulate text which is published with the purpose of informing the public on matters of public interest.
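For a generative AI system, these obligations typically translate into a user-facing disclosure and a machine-readable marking of synthetic output. The following minimal sketch illustrates both; the wrapper function and metadata fields are assumptions for illustration, as the AI Act does not prescribe a concrete format.

```python
# Illustrative sketch of two transparency measures for a generative AI system:
# (1) disclosing to the user that they are interacting with an AI system, and
# (2) attaching machine-readable marking to synthetic output.
# Field names are assumptions; the AI Act does not mandate a specific format.

def wrap_chatbot_response(generated_text: str, model_name: str) -> dict:
    return {
        "disclosure": "You are interacting with an AI system.",  # interaction disclosure
        "content": generated_text,
        "metadata": {  # machine-readable marking of AI-generated content
            "ai_generated": True,
            "generator": model_name,
        },
    }

# Example use with a hypothetical model name:
response = wrap_chatbot_response("Here is a summary of your contract ...", "example-gpai-model")
print(response["disclosure"])
```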
GPAI models with and without systemic risk
GPAI models are subject to a range of obligations that foster technological deployment while ensuring adequate safeguards, including the provision of detailed technical documentation to the competent authorities (Annex XI AI Act), the provision of information to downstream providers, the implementation of a policy to comply with EU copyright law and the publication of a summary of the content used for training the GPAI model (Article 53(1) AI Act). Providers that release GPAI models under a free and open-source licence are exempt from some of these obligations (Article 53(2) AI Act).
GPAI models with systemic risk are subject to additional obligations. A GPAI model is considered to pose systemic risk if it has high-impact capabilities, eg because of its great computing power (currently presumed when the cumulative computation used for its training exceeds 10^25 FLOPS, a threshold the Commission may amend in the future; a rough illustration follows after the list below). Furthermore, a GPAI model can be classified as posing systemic risk by decision of the Commission (either ex officio or following a qualified alert from the scientific panel of independent experts). The provider of such a GPAI model needs to:
- perform model evaluation in accordance with standardised protocols;
- conduct systemic risk assessments and mitigate systemic risks;
- report incidents to authorities; and
- ensure adequate cybersecurity protection, including the physical infrastructure of the model (Article 55(1) AI Act).
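To put the 10^25 FLOPS threshold into perspective, a rule of thumb sometimes used in the machine-learning literature approximates training compute as roughly six times the number of model parameters times the number of training tokens. The following back-of-the-envelope sketch compares such an estimate against the threshold; the approximation is not part of the AI Act, and the figures are invented for illustration, not data about any real model.

```python
# Rough back-of-the-envelope check against the 10^25 FLOPS presumption for
# systemic risk (Article 51 AI Act). The "6 * parameters * tokens" formula is a
# common rule of thumb, not a legal test; the example figures are made up.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_compute(num_parameters: float, num_training_tokens: float) -> float:
    return 6 * num_parameters * num_training_tokens

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
compute = estimated_training_compute(100e9, 10e12)
print(f"Estimated training compute: {compute:.2e} FLOPS")
print("Presumed to have high-impact capabilities:", compute > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```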
Key takeaways
- The AI Act follows a risk-based approach taking into account the risks of AI to natural persons. The AI Act therefore distinguishes between prohibited AI practices, high-risk AI systems, certain AI systems with transparency risk and GPAI models with and without a systemic risk.
- To determine whether and which obligations under the AI Act apply, organisations need to assess how their use of AI is classified under the AI Act and in which role they act along the AI value chain.
- Before placing an AI system on the market, putting it into service, deploying, distributing, importing or otherwise using it, it must be carefully ruled out that the system entails an “unacceptable risk” within the meaning of the AI Act.
- The AI Act provides for an extensive set of obligations regarding high-risk AI systems.
- Providers of AI systems generally falling under one of the high-risk areas can assess and document whether the AI system falls under one of the limited exceptions in Article 6(3) AI Act. If so, they must register the AI system in an EU database and can thereby avoid the strict compliance obligations for high-risk AI systems under the AI Act.
- Providers of AI systems posing transparency risks, especially generative AI systems like chatbots, must comply with specific transparency requirements.
- The obligations of providers of GPAI models focus on technical documentation and increase if the GPAI model has systemic risk.
In our next blog, we'll take a closer look at the personal and territorial scope of the AI Act.