
Freshfields TQ

Technology quotient - the ability of an individual, team or organization to harness the power of technology


EU AI Act unpacked #1: AI systems and GPAI models

The EU Artificial Intelligence Act (AI Act) was approved by the European Parliament in March 2024 and is expected to be published in the EU Official Journal in June 2024. To assist businesses in navigating this groundbreaking legislation, we’ve launched our EU AI Act unpacked blog series. Over the next few weeks, we’ll take a look at some of the most interesting topics within the Act. In these blogs, we’ll also provide practical tips on the actions your organisation should take to ensure compliance with this transformative regulation.

In our first post we explore what exactly AI systems and GPAI models are and how they are defined under the AI Act.

Overview

On Wednesday, 13 March 2024, the European Parliament approved the EU Artificial Intelligence Act (AI Act), the world's first comprehensive legal framework for the regulation of AI, which will significantly influence the further development of AI regulation worldwide.

The AI Act pursues a risk-based approach and distinguishes between:

AI systems – divided into three different categories:

  1. Prohibited AI practices,
  2. High-risk AI systems, and
  3. Certain AI systems (with transparency risk).

General-purpose AI (GPAI) models – a tiered approach applies here as well, depending on whether the model is a GPAI model with systemic risk or a “normal” GPAI model.

But what is an AI system or GPAI model? In this blog post, we take a look at this quite technical question.

AI system

AI systems are central to the AI Act, which defines an AI system in Article 3(1) as:

“[1] a machine-based system [2] that is designed to operate with varying levels of autonomy and that [3] may exhibit adaptiveness after deployment, and that, [4] for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions [5] that can influence physical or virtual environments.” [numbering added by us]

The definition is nearly identical to the OECD’s updated 2023 definition of an AI system (see also the OECD’s explanatory memorandum), an alignment intended by the European legislator to ensure legal certainty and to facilitate international convergence and wide acceptance.

All five requirements [1] to [5] must be met cumulatively. The rather vague requirements are further specified in the recitals. These explain that AI systems are distinguished from simpler traditional software by their capability to infer, i.e. to derive models or algorithms from inputs or data, which includes machine learning; they are not rule-based systems whose rules are defined solely by humans (requirement [4]). In addition, an AI system must have some degree of independence from human involvement and the capability to operate without human intervention (requirement [2]), and it may have self-learning capabilities allowing it to change while in use (requirement [3]) – note the definition’s wording (“may exhibit adaptiveness”).

Given the several cumulative requirements of an AI system under the AI Act, providers and deployers of software using AI should diligently check whether all criteria are fulfilled and which compliance obligations apply. Not all software using AI fulfils the notion of an AI system, so an individual assessment is required.

General-purpose AI models (GPAI models)

Interestingly, the AI Act does not define what an AI model is, only what a GPAI model is. Article 3(63) AI Act defines a GPAI model as:

“[1] an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, [2] that displays significant generality and [3] is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and [4] that can be integrated into a variety of downstream systems or applications, [5] except AI models that are used for research, development or prototyping activities before they are placed on the market.” [numbering added by us]

The recitals clarify that the notion of GPAI models should be set clearly apart from the notion of AI systems to enable legal certainty. Key criteria are the model’s generality and its capability to competently perform a wide range of distinct tasks. Such models are typically trained on large amounts of data and can be placed on the market in various ways.

It is important to understand that an AI model, including a GPAI model, can be an essential part of an AI system (referred to in the definition as a “downstream system”), but does not constitute such a system on its own. Probably the most prominent example of GPAI models is large language models (LLMs), which form the basis of popular chat tools.

Providers should note that in cases where AI models are integrated into AI systems, the obligations of the AI Act for AI models do not cease but continue to apply.

Key takeaways

Understanding the type of AI

  • Providers of AI systems and models must differentiate between AI systems and GPAI models. AI systems are divided into three categories carrying different obligations and prohibitions, whereas GPAI models form a category of their own with independent regulation.
  • Not every software containing AI is considered an AI system.
  • An AI system typically integrates one or multiple AI models. In that case, obligations for both AI systems and GPAI models may apply simultaneously, potentially to different stakeholders along the AI value chain and to different degrees.

Provider responsibilities and user obligations

  • Depending on the risk category, providers and deployers of AI systems face (rigorous) compliance obligations and must ensure transparency, accountability and safety.
  • Providers of GPAI models have distinct obligations, with GPAI models with systemic risk subject to stricter compliance obligations.

In our next blog, we'll explore the various types of AI systems and discuss GPAI models in greater depth.

Tags

ai, eu ai act, eu ai act series