
EU AI Act unpacked #27: Guidelines on the scope of obligations for General-Purpose AI models

On 18 July 2025, the European Commission (EC) published its “Guidelines on the scope of obligations for general-purpose AI models”. While these guidelines are not binding, they set out the EC’s interpretation and application of the AI Act, on which the EC will base its enforcement action, and they therefore provide valuable insights. Notably, they follow a draft version released in April 2025 for public consultation.

The final guidelines focus on four core topics: (1) the definition of general-purpose AI (GPAI) models, (2) a provider placing a GPAI model on the market, (3) the open-source exemptions to the obligations for GPAI models, and (4) some considerations on the enforcement of the GPAI model framework. 

1. What is a GPAI model?

The guidelines specify three points regarding GPAI models: (i) an indicative criterion for when a model is considered a GPAI model, (ii) what the ‘lifecycle’ of a GPAI model is, and (iii) when a GPAI model poses systemic risk.

Indicative criterion for identifying GPAI models

While the definition of a GPAI model in Article 3(63) AI Act remains broad, the EC now takes the approach in the guidelines of identifying GPAI models by the amount of computational resources used to train the model and by the model’s capabilities.

Following those considerations, for a model to be considered a GPAI model it must be able to generate language (text or speech), text-to-image or text-to-video content, and its training compute must be greater than 10^23 FLOP (floating-point operations). The draft guidelines had set a lower indicative threshold of 10^22 FLOP for presuming a model is GPAI. With the final guidelines raising this threshold, fewer models will presumptively qualify as GPAI models than previously expected.

As this criterion is only indicative, the EC highlights:

  • A model that meets the criterion but does not display generality, or is not capable of competently performing a wide range of distinct tasks, is not a GPAI model. For instance, models limited to narrow tasks like speech transcription or image upscaling are excluded.
  • A model that does not meet the criterion may still be considered a GPAI model if it displays substantial generality and broad task competency.

Importantly, the capacity to generate language – whether text or speech – is viewed as a strong indicator of general-purpose capabilities, due to the central role language plays in reasoning, knowledge representation and communication. Image- or video-generating models may also qualify as GPAI models if they can support diverse visual tasks.
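To make the interplay between the indicative compute threshold and the generality overrides concrete, here is a minimal sketch in Python (the 10^23 FLOP figure comes from the guidelines; all function and parameter names, and the decision structure, are our own illustrative assumptions):

```python
# Minimal sketch of the EC's indicative GPAI criterion; illustrative only.
GPAI_COMPUTE_THRESHOLD = 1e23  # training compute in FLOP (per the guidelines)

def is_presumptively_gpai(training_flop: float,
                          generates_language_image_or_video: bool,
                          displays_generality: bool,
                          broad_task_competency: bool) -> bool:
    """Apply the indicative criterion together with the EC's two overrides."""
    meets_criterion = (training_flop > GPAI_COMPUTE_THRESHOLD
                       and generates_language_image_or_video)
    if meets_criterion and not (displays_generality and broad_task_competency):
        # Override 1: e.g. models limited to speech transcription or
        # image upscaling fall outside the definition despite the compute.
        return False
    if not meets_criterion and displays_generality and broad_task_competency:
        # Override 2: a model below the threshold may still qualify.
        return True
    return meets_criterion
```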

What is the lifecycle of a GPAI model?

Following Recitals 114 and 115 AI Act, providers of GPAI models with systemic risk must take appropriate measures to assess risks and ensure cybersecurity protection along the entire model lifecycle.

The EC defines the lifecycle broadly: it begins with the first large pre-training run, and the model is treated as the same model throughout its development, market availability and use (unless it is significantly modified by another actor, as detailed in part 2 below).

What is a GPAI model with systemic risk?

Article 51 AI Act determines a GPAI model to have systemic risk if it has high-impact capabilities, which is presumed when the cumulative amount of computation used for its training is greater than 10^25 FLOP. The EC may issue delegated acts to revise the threshold or add benchmarks or indicators for assessing high-impact capabilities as technology evolves.

If a model is presumed to have high-impact capabilities, the provider must notify the EC of certain information (e.g., the estimated amount of compute). This notification may even be necessary before training is completed, if it is reasonably foreseeable that the model will meet the threshold. The EC may also designate a GPAI model as one with systemic risk on its own initiative or following an alert from the scientific panel. The provider may contest a classification based on the presumption, or a designation by the EC, with arguments demonstrating that the model does not have high-impact capabilities.
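Expressed as a simple check (an illustrative sketch only: the 10^25 FLOP presumption comes from Article 51 AI Act, while the names are hypothetical):

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute in FLOP

def presumed_high_impact(cumulative_training_flop: float) -> bool:
    """Presumption of high-impact capabilities under Article 51 AI Act.

    Note: the notification duty can arise before training completes,
    once it is reasonably foreseeable that the threshold will be met.
    """
    return cumulative_training_flop > SYSTEMIC_RISK_THRESHOLD
```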

2. Guidance on the definition of a provider placing a GPAI model on the market

The guidelines specify (i) when a company is considered to be a ‘provider’ of a GPAI model, (ii) when such a model is considered ‘placed on the market’ and, in particular, (iii) what applies when it is integrated into an AI system.

Provider of a GPAI model

Following the definition in Article 3(3) AI Act, the EC provides some guidance on who should be regarded as the provider of a GPAI model in different scenarios. Such examples include actors who develop a GPAI model (or have it developed) and place it on the market, even if the model is uploaded to a repository hosted by another party. If a GPAI model is developed by different actors for a consortium or collaboration and placed on the market, the provider is typically the coordinator of the consortium or collaboration; alternatively, the consortium or collaboration itself may be the provider, depending on the specific case.

Notably, the EC considers a downstream modifier to become the provider if the modification leads to a significant change in the model’s generality, capabilities or systemic risk. A significant change is assumed when the training compute used for the modification is greater than a third of the training compute of the original model. If the downstream modifier does not know this value and cannot estimate it, the threshold is instead a third of 10^25 FLOP for GPAI models with systemic risk and a third of 10^23 FLOP for GPAI models without systemic risk.
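As a rough illustration of this one-third rule, consider the following sketch (the thresholds are taken from the guidelines; the function, its parameters and the worked example are our own assumptions):

```python
def modifier_becomes_provider(modification_flop: float,
                              original_training_flop: float | None,
                              original_has_systemic_risk: bool) -> bool:
    """One-third rule for downstream modifiers (names are illustrative).

    If the original model's training compute is unknown and cannot be
    estimated, the guidelines substitute a third of the relevant
    indicative threshold (10^25 FLOP with systemic risk, 10^23 without).
    """
    if original_training_flop is not None:
        reference = original_training_flop
    else:
        reference = 1e25 if original_has_systemic_risk else 1e23
    return modification_flop > reference / 3

# Example: a fine-tune using 5e22 FLOP of a model without systemic risk
# whose training compute is unknown: 5e22 > 1e23 / 3, so the downstream
# modifier would presumptively become the provider of the modified model.
```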

Placing a GPAI model on the market

Under Article 3(9) AI Act, a GPAI model is considered placed on the Union market when it is made available for the first time in the course of a commercial activity, whether for payment or free of charge. The EC offers some examples when a GPAI model is made available for the first time on the Union market:

  • via a software library or package;
  • via an API;
  • through a public catalogue, hub or repository;
  • as a physical copy;
  • via a cloud computing service;
  • by being copied onto a customer’s own infrastructure;
  • by being integrated into a chatbot which is accessible through a web interface;
  • via a mobile application accessible through app stores; or
  • if it is used for internal processes that are essential for providing a product or service to third parties or that affect the rights of natural persons in the Union.

GPAI models integrated into AI systems

Additionally, a GPAI model may be deemed placed on the market according to the EC if: 

  • The provider integrates the model into its own AI system that is made available or put into service (Recital 97 AI Act).
  • An upstream actor supplies the model for the first time to a downstream actor in the Union, who integrates it into an AI system. In this case, the upstream actor is the provider of the GPAI model, while the downstream actor is the provider of the AI system.
  • An upstream actor supplies the model for the first time to a downstream actor outside the Union, who integrates it into an AI system and places that AI system on the Union market. The upstream actor is generally considered the provider of the GPAI model – unless the upstream actor has clearly excluded such use, in which case the downstream actor assumes the role of the provider of the GPAI model.
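These attribution rules can be summarised in a small decision function (an illustrative sketch of the two cross-border scenarios above; the names and the string outputs are assumptions, not terms from the guidelines):

```python
def gpai_model_provider(integration_in_union: bool,
                        upstream_clearly_excluded_use: bool) -> str:
    """Who is the provider of the GPAI model in the EC's scenarios where
    an upstream actor supplies the model to a downstream integrator."""
    if integration_in_union:
        # Downstream actor in the Union integrates the model into an AI
        # system: the upstream actor remains provider of the GPAI model.
        return "upstream actor"
    # Integration outside the Union, with the AI system then placed on the
    # Union market: the upstream actor is the provider unless it clearly
    # excluded such use, in which case the downstream actor takes the role.
    return "downstream actor" if upstream_clearly_excluded_use else "upstream actor"
```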

3. Open-source exemptions

Under the AI Act, providers of certain open-source GPAI models may be exempted from key obligations in Articles 53 and 54 AI Act, provided the model does not pose a systemic risk. The EC’s guidelines outline that to qualify for this exemption, a provider must meet stringent conditions across three distinct areas: (i) the conditions of the model’s license, (ii) the lack of monetisation, and (iii) the public availability of the model’s parameters and architecture.

License conditions

To qualify, a model must be released under a “free and open-source” license. The guidelines interpret this as a license granting users unrestricted rights to:

  • Access: Freely obtain the model without payment. Reasonable security measures like user verification are allowed, as long as they are not discriminatory.
  • Use: Use the model for any purpose. Licenses that limit use to “non-commercial” or “research-only” contexts, or that impose restrictions based on user scale (e.g., requiring payment above a certain number of users), are disqualifying.
  • Modify: Adapt or fine-tune the model without restriction.
  • Redistribute: Share the model or any modified versions of it.

Notably, the guidelines permit licenses to include specific, proportionate, and non-discriminatory safety-oriented terms that restrict usage in applications or domains posing a significant risk to public safety, security or fundamental rights.

Lack of monetisation

The exemption only applies if the model is not monetised. The Commission interprets monetisation broadly and considers the following to be disqualifying practices:

  • Requiring direct or indirect payment for the model.
  • Employing a dual licensing model where, for example, academic use is free but commercial use requires payment.
  • Bundling the model with mandatory paid services, such as support or maintenance.
  • Making the model exclusively available on a platform that requires payment for access, including platforms supported by paid advertising.
  • Requiring the processing of personal data for purposes other than what is strictly necessary for security.

However, offering purely optional paid services (e.g., premium tools, support, or advanced versions) alongside the freely available base model is acceptable according to the EC’s guidelines. 

Transparency of parameters

Finally, providers must make the model’s technical details publicly available to a degree that allows effective access, use, and modification. This includes disclosing:

  • the model’s parameters and weights;
  • information on the model’s architecture; and
  • comprehensive usage documentation, including the model’s capabilities, limitations, and the technical instructions (e.g., required infrastructure, configuration) needed for downstream providers to integrate it into their own AI systems.
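Taken together, the exemption can be read as a checklist of cumulative conditions, as in this illustrative sketch (the field and function names are ours; the conditions are those set out in the guidelines):

```python
from dataclasses import dataclass

@dataclass
class GPAIModelRelease:
    free_and_open_license: bool       # access, use, modification, redistribution
    monetised: bool                   # any disqualifying practice listed by the EC
    parameters_and_docs_public: bool  # weights, architecture, usage documentation
    systemic_risk: bool               # the exemption never covers systemic-risk models

def open_source_exemption_applies(release: GPAIModelRelease) -> bool:
    """All conditions must hold; otherwise Articles 53 and 54 apply in full."""
    return (release.free_and_open_license
            and not release.monetised
            and release.parameters_and_docs_public
            and not release.systemic_risk)
```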

4. Enforcement and compliance mechanisms

The guidelines provide some remarks on how the obligations of the AI Act will be enforced for GPAI models: they address (i) the key role played by the GPAI code of practice, (ii) the AI Office’s supervisory mandate and (iii) the EC’s expectations for compliance following the entry into application of Chapter V of the AI Act on 2 August 2025.

GPAI Code of Practice

Providers of GPAI models may demonstrate compliance with the AI Act by adhering to a code of practice developed in collaboration with the AI Office, once that code has been assessed as adequate by the AI Office and the Board (for more information on the code of practice, see our blogpost #26 of the EU AI Act unpacked series).

Supervision by the AI Office

According to the EC’s guidelines, the AI Office will take a collaborative, staged and proportionate approach when supervising, investigating, enforcing and monitoring Chapter V of the AI Act. The guidelines suggest that the AI Office expects to work closely and informally with the providers of GPAI models, which includes proactive reporting of compliance measures and active engagement with the AI Office. The AI Office will also consider whether providers have implemented relevant technical standards, in particular harmonised standards published in the Official Journal of the European Union, if available (for more information on harmonised standards, see our blogpost #15 of the EU AI Act unpacked series).

Compliance timeline

The obligations for GPAI models officially apply from 2 August 2025, but the guidelines clarify certain transitional rules:

  • New models (placed on the market after 2 August 2025): These models must comply with the AI Act's obligations from day one. The AI Office acknowledges that providers may need time to adapt and encourages them to proactively communicate their compliance plans. 
  • Legacy models (placed on the market before 2 August 2025): Providers have a two-year grace period and must be fully compliant by 2 August 2027. Crucially, providers of these legacy models are not required to retrain the models where this is technically impossible for actions taken in the past, where information on the training data is unavailable, or where retraining would impose a disproportionate burden. Any such limitation must be clearly disclosed and justified.
  • Fines: While the obligations apply from 2 August 2025, the Commission's power to impose fines for non-compliance will only begin one year later, on 2 August 2026.
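The three dates can be captured in a small helper (a sketch under the stated assumptions; the names are illustrative, the dates are those in the guidelines):

```python
from datetime import date

APPLICATION_DATE = date(2025, 8, 2)  # Chapter V obligations apply
FINES_START = date(2026, 8, 2)       # EC may begin imposing fines
LEGACY_DEADLINE = date(2027, 8, 2)   # deadline for legacy models

def compliance_deadline(placed_on_market: date) -> date:
    """New models must comply from day one; legacy models by 2 August 2027."""
    if placed_on_market >= APPLICATION_DATE:
        return placed_on_market
    return LEGACY_DEADLINE
```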

5. Conclusion

The EC’s guidelines on the scope of GPAI model obligations under Chapter V of the AI Act are a crucial next step in the implementation of the governance of AI in Europe. In addition, on 24 July 2025 the EC published a template for the public summary of training content for general-purpose AI models, which specifies the information providers must make publicly available under Article 53(1)(d) AI Act. With the GPAI code of practice (blogpost #26 of the EU AI Act unpacked series), the guidelines on the scope of GPAI model obligations and the template for the summary of training content, providers now have three additional sources providing insight into the regulator’s understanding of the AI Act obligations for GPAI model providers.

The guidelines in particular offer some valuable insight and aim to ease the implementation process. Whether they achieve this goal remains to be seen. Even though the EC and the AI Office will not be able to issue fines before August 2026, the guidelines indicate that we will see increasing informal regulatory dialogue until then. It is therefore advisable that prospective providers consider at an early stage whether and how they fall under the obligations of the AI Act.

Tags

ai, eu ai act, eu ai act series