Freshfields TQ

EU AI Act unpacked #7: Use of GPAI models along the AI value chain

In the previous post of our blog series, we explained what a ‘fundamental rights impact assessment’ is under the EU AI Act (AI Act). In this blog post, we take a closer look at how the AI Act regulates the commercial use of general purpose AI (GPAI) models and systems along the AI value chain.

GPAI model vs. GPAI system

Article 3(63) AI Act defines the term ‘GPAI model’ as an 

“AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”

Prominent examples of such GPAI models are so-called large language models (LLMs). GPAI models can be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as a direct download, or as a physical copy.

GPAI models are typically used as the technical foundation of various AI systems such as chatbots or synthetic image and video generators. However, they require the addition of further components, such as a user interface, to become AI systems. When fitted with a user interface and placed on the market for non-high-risk purposes, they usually become a ‘GPAI system’, meaning an AI system which is based on a GPAI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems (Article 3(66) AI Act).
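
To illustrate the distinction, the following minimal Python sketch shows how a downstream actor might wrap a GPAI model, accessed here through a purely hypothetical API endpoint, with a simple user interface to create a general-purpose chatbot, ie a GPAI system. The endpoint URL, model identifier and key handling are illustrative assumptions, not references to any real provider’s API.

    import requests

    # Hypothetical API endpoint of a GPAI model provider (illustrative only).
    API_URL = "https://api.example-gpai-provider.com/v1/completions"
    API_KEY = "YOUR_API_KEY"

    def query_gpai_model(prompt: str) -> str:
        """Send a prompt to the underlying GPAI model via the provider's API."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": "example-llm", "prompt": prompt},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]

    def chat_interface() -> None:
        """The user interface component that turns the bare model into an AI system."""
        print("Chatbot ready - type 'quit' to exit.")
        while True:
            user_input = input("You: ")
            if user_input.lower() == "quit":
                break
            print("Bot:", query_gpai_model(user_input))

    if __name__ == "__main__":
        chat_interface()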

GPAI models can be used either by their original provider or by providers that integrate a third-party or open-source GPAI model into their own AI systems; the AI Act refers to the latter as ‘downstream providers’ and to their systems as ‘downstream AI systems’. In this context, the AI Act provides for only a few specific rules regulating the commercial use of GPAI models along the value chain.

Obligations for GPAI model providers 

The AI Act highlights the importance of GPAI model providers for the AI value chain as their models may form the basis for a wide range of downstream systems (see Recital 101 AI Act). Concretely, this means that GPAI model providers are, inter alia, required to draw up, keep up-to-date and make available information and documentation to providers that have integrated the GPAI model into their downstream AI systems (Article 53(1)(b) AI Act). 

Such information and documentation must enable downstream providers to have a good understanding of the capabilities and limitations of the GPAI model and to comply with their own obligations under the AI Act. It must contain at least the elements set out in Annex XII of the AI Act, including a general description of the model and a description of the process for its development. However, this obligation does not apply to GPAI models without systemic risk that are placed on the market under free and open-source licences allowing for the access, usage, modification and distribution of the model and whose parameters are made publicly available (Article 53(2) AI Act).
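
As a rough, non-authoritative illustration of what such downstream-facing documentation could look like when handled programmatically, the Python sketch below models a simple documentation record. The field names merely mirror the elements mentioned above (a general description of the model and a description of its development process, plus capabilities and limitations); Annex XII itself contains the authoritative and more extensive list.

    import json
    from dataclasses import asdict, dataclass, field

    @dataclass
    class GPAIModelDocumentation:
        """Illustrative record of Annex XII-style information for downstream providers."""
        model_name: str
        general_description: str   # general description of the model
        development_process: str   # description of the process for its development
        capabilities: list = field(default_factory=list)
        limitations: list = field(default_factory=list)

    # Hypothetical example entry for a fictitious model.
    doc = GPAIModelDocumentation(
        model_name="example-llm",
        general_description="Decoder-only LLM trained on large-scale text data.",
        development_process="Self-supervised pre-training followed by instruction tuning.",
        capabilities=["text generation", "summarisation"],
        limitations=["may produce inaccurate output", "text-only modality"],
    )

    # Export the record, eg to hand over to a downstream provider.
    print(json.dumps(asdict(doc), indent=2))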

Consequences of modifying or fine-tuning existing GPAI models

Even though the AI Act acknowledges that GPAI models may be further modified or fine-tuned into new models (see Recital 97 AI Act), there are no specific obligations for actors using existing third-party or open-source GPAI models. 

The AI Act does not specify the conditions under which the modification or fine-tuning of an existing GPAI model leads to the creation of a new GPAI model, possibly resulting in a change of role for the party fine-tuning the model (ie becoming the responsible model provider under the AI Act). For high-risk AI systems, however, the AI Act stipulates that any third party that makes a “substantial modification” to an existing high-risk AI system will be considered a provider subject to the applicable provider obligations (Article 25(1)(b) AI Act). Even if not directly applicable to GPAI models, this provision at least suggests that not every minor technical modification to an existing GPAI model would trigger a change of responsibilities. 

Whether the threshold for a ‘new’ GPAI model has been reached as a result of modifying or fine-tuning an existing model must therefore likely be decided on a case-by-case basis. For example, a decisive criterion could be the technical and organisational effort incurred by the modification or fine-tuning or the question of whether the process changes the GPAI model’s technical specifications laid down in its documentation. 
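
For readers less familiar with what fine-tuning involves in technical terms, the sketch below shows one common approach using the open-source Hugging Face transformers library. The model identifier and training texts are purely illustrative assumptions, and nothing in the sketch indicates where the legal threshold for a ‘new’ GPAI model would lie.

    # Minimal fine-tuning sketch using Hugging Face transformers (illustrative only).
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    BASE = "open-source-llm"  # hypothetical model identifier on a model hub
    tokenizer = AutoTokenizer.from_pretrained(BASE)
    model = AutoModelForCausalLM.from_pretrained(BASE)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # some tokenizers lack a pad token

    # Tokenise a tiny, domain-specific corpus (placeholder data).
    texts = ["Domain-specific training example ...", "Another training example ..."]
    encodings = []
    for text in texts:
        enc = tokenizer(text, truncation=True, padding="max_length", max_length=128)
        enc["labels"] = enc["input_ids"].copy()  # causal LM: predict the next token
        encodings.append(enc)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="finetuned-model",
                               num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=encodings,  # a list of feature dicts suffices for a sketch
    )
    trainer.train()

    # Documenting the extent of the changes supports the case-by-case assessment
    # of whether a 'new' GPAI model has been created.
    trainer.save_model("finetuned-model")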

By contrast, modifying an existing non-high-risk GPAI system in such a way that it becomes a high-risk AI system will trigger a change of responsibilities, ie the modifying party will become the responsible provider for that new high-risk AI system (Article 25(1)(c) AI Act). This could, for example, be the case if a chatbot is put into service by a third-party actor for specific high-risk use cases (eg for recruitment or credit scoring purposes). The initial provider must then closely cooperate with the new provider by providing the necessary information and the reasonably expected technical access required for the fulfilment of the obligations for providers of high-risk AI systems, unless the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system (Article 25(2) AI Act).

Key takeaways

  • Fine-tuning an existing GPAI model may constitute the development of a new model, potentially triggering the GPAI model provider obligations under the AI Act;
  • Modifying a third-party GPAI system (eg a chatbot) into a high-risk AI system may trigger the respective high-risk AI system provider obligations; and
  • A change of responsibilities may trigger an information and cooperation obligation for the initial provider unless this provider has clearly excluded the modification into a high-risk AI system.

Further, due to the remaining legal uncertainties regarding the applicable roles and statutory obligations, providers and commercial users of GPAI models and systems should consider precisely specifying the parties’ responsibilities under the AI Act in their underlying contractual agreements.

What’s next?

In our next blog post, we will focus on the question of who will enforce the obligations under the AI Act. In particular, we will take a look at the role and setup of the newly introduced AI Office.
 

Tags

ai, eu ai act, eu ai act series