
Freshfields TQ

Technology quotient - the ability of an individual, team or organization to harness the power of technology

11 minute read

The Final General-Purpose AI Code of Practice - A Short Guide

The final version of the General-Purpose AI Code of Practice (GPAI CoP) was published on 10 July 2025. It follows three earlier iterations, published in November 2024, December 2024, and March 2025. Subject to assessment and approval by the European Commission and Member States, the 10 July version is presented as final ahead of the AI Act’s GPAI provisions becoming applicable on 2 August 2025. The GPAI CoP will be complemented by Commission guidelines on key concepts related to GPAI models, due to be published later in July. The GPAI CoP is a voluntary tool facilitated by the AI Office and developed by a group of academics and industry stakeholders, designed to help GPAI model providers meet the respective requirements of the AI Act.

The GPAI CoP consists of three separately authored Chapters: Transparency, Copyright, and Safety and Security. Below, we outline what each chapter commits signatories to, as well as the most notable changes introduced in this final iteration. In particular, the Transparency chapter now uses more cautious and less prescriptive language. The Copyright chapter also introduces sharpened drafting, with "best efforts" language replaced by firmer commitments throughout. The Safety and Security chapter underwent significant streamlining, with a reduced number of consolidated commitments. 

Chapter on Transparency

Article 53(1)(a) and (b) of the AI Act requires GPAI model providers to draw up and keep up-to-date the technical documentation of the model. The GPAI CoP chapter on transparency aims to help signatories to comply with these requirements. In addition, the GPAI CoP contains a model documentation form (Model Documentation Form) that can be used to document required information.

Main changes between Version 3 and the final version

  • Protection of confidential information: The final draft highlights the need to protect intellectual property rights, confidential business information and trade secrets when providing information that goes beyond the Model Documentation (Additional Information) to downstream providers. 
  • 14-day period to answer requests for Additional Information from downstream providers: Another novelty is that this Additional Information must be provided within 14 days, and only if it is necessary to help the downstream provider understand the GPAI model and comply with their obligations under the AI Act.

Updated list of measures

  • Model Documentation Form: The Model Documentation Form determines which information must be disclosed to which actors (AI Office, national competent authorities or downstream providers who intend to integrate the GPAI model into their AI systems) while protecting trade secrets and other confidential information of the GPAI model provider as well as third parties. The information is divided into general information, model properties, methods of distribution and licenses, (intended) use, training process, data used for training, testing and validation, computational resources during training and energy consumption during training and inference.
  • Draw up and keep up-to-date model documentation (Measure 1.1): Signatories must document all information referred to in the Model Documentation Form. However, the Model Documentation Form itself does not need to be used. The documented information must be updated and retained for a period of 10 years after the model has been placed on the market.
  • Provide relevant information (Measure 1.2): Signatories must publicly disclose contact information through which the AI Office, national competent authorities and downstream providers can request access to the relevant information contained in the Model Documentation or other Additional Information. While the AI Office and national competent authorities must request the relevant information, certain information must be provided to downstream providers proactively. Signatories are encouraged to consider whether the information contained in the Model Documentation Form can be disclosed publicly. Downstream providers may request Additional Information if they set out why it is necessary to enable them to have a good understanding of the capabilities and limitations of the GPAI model relevant for its integration into their AI system and to comply with their obligations. Following such a request, Signatories must provide the information within 14 days, save for exceptional circumstances.
  • Ensuring quality, integrity and security of information (Measure 1.3): Lastly, Signatories will need to ensure that the documented information is controlled for quality and integrity, retained as evidence of compliance with the AI Act and protected from unintended alterations.
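As an illustration of the measures above, the information categories of the Model Documentation Form could be tracked internally as a simple structured record. This is only a hypothetical sketch: the field names below follow the categories listed in the chapter, not the official form, and a real implementation would need the form's actual fields.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelDocumentation:
    """Hypothetical internal record mirroring the Model Documentation
    Form's information categories (not the official form itself)."""
    general_information: dict = field(default_factory=dict)      # provider, model name, version
    model_properties: dict = field(default_factory=dict)         # architecture, parameters, modalities
    distribution_and_licenses: dict = field(default_factory=dict)
    intended_use: dict = field(default_factory=dict)
    training_process: dict = field(default_factory=dict)
    training_data: dict = field(default_factory=dict)            # data used for training, testing, validation
    computational_resources: dict = field(default_factory=dict)  # compute used during training
    energy_consumption: dict = field(default_factory=dict)       # energy during training and inference

doc = ModelDocumentation(
    general_information={"provider": "ExampleCo", "model": "example-model-1"},  # hypothetical names
)
# Serialise for retention: Measure 1.1 requires keeping the documented
# information for 10 years after the model is placed on the market.
record = asdict(doc)
```

Keeping the record serialisable in one place also makes it easier to apply Measure 1.3's quality and integrity controls to a single artefact.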

Chapter on Copyright

Article 53(1)(c) AI Act requires that model providers put in place a copyright policy to comply with EU copyright law. This is the basis for the Copyright Chapter of the GPAI CoP. While the overall substance of the measures remains consistent, the final GPAI CoP significantly raises the bar for compliance. Compared to the third draft, the final version introduces notable changes, refining and strengthening the commitments imposed on GPAI model providers. Most notably, in the final text, language anchoring measures as “best efforts” has been replaced by unqualified binding commitments throughout the chapter.

Main changes between Version 3 and the final version

  • More authoritative language: “best efforts” qualifiers have been replaced by hard commitments.
  • A single copyright policy: Compared to the third draft, the final version clarifies that the copyright policy must cover all GPAI models placed on the EU market in one single document.
  • New definition of “infringing websites”: The term “piracy domains” from the third draft has been removed. Instead, the GPAI CoP now refers to "websites [...] which are, at the time of web-crawling, recognized as persistently and repeatedly infringing copyright and related rights on a commercial scale”. The clarification “at the time of web-crawling” is new in the final draft.
  • Respecting rights reservations: Concerning the rights reservations by copyright holders that signatories have to comply with, the GPAI CoP states that these also include protocols adopted by international or European standardisation organisations, or protocols that are “state-of-the-art”, including “technically implementable”, and “widely adopted” by rightsholders.
  • Mitigating risk of infringing output: A new obligation for open-source GPAI models has been added. They must be accompanied by documentation alerting users to the prohibition of copyright-infringing uses of the model.

Updated list of Commitments

  • Copyright Policy (Measure 1.1): The first measure of the GPAI CoP requires GPAI model providers to draw up, keep up-to-date and implement a copyright policy. The final GPAI CoP clarifies that this copyright policy must cover all GPAI models placed on the EU market in one single document and incorporate the GPAI CoP measures. This may pose practical challenges for providers managing multiple model iterations.
  • Lawfully accessible content (Measure 1.2): GPAI model providers compiling data for text and data mining purposes commit to ensure that they only reproduce and extract lawfully accessible works and other protected subject matter. This includes committing not to circumvent effective technological measures as defined in Article 6(3) of Directive 2001/29/EC. This obligation now refers not only to paywalls but also to technological denial or restriction of access imposed by subscription models. In addition, Signatories commit to refrain from web-crawling “websites that make available to the public content and which are, at the time of web-crawling, recognised as persistently and repeatedly infringing copyright and related rights on a commercial scale by courts or public authorities in the European Union and the European Economic Area”. The authoritativeness of this prohibition has been sharpened, i.e. the “reasonable efforts” language has been removed. The clarification “at the time of web-crawling” is new in the final draft. As under the third draft, the measure states that a dynamic list of hyperlinks to these websites issued by relevant bodies will be published on an EU website.
  • Respecting rights reservations (Measure 1.3): The third measure mandates GPAI model providers to comply with machine-readable, state-of-the-art rights reservations (“opt-outs”) by copyright holders. This includes employing web-crawlers that follow robots.txt, as well as following “other appropriate machine-readable protocols”, including those which may yet be adopted by international or European standardisation organisations. This measure also continues to set out a transparency obligation vis-à-vis rightsholders, enabling them to obtain information about web-crawlers, robots.txt and other measures adopted to comply with rights reservations.
  • Mitigating the risk of infringing outputs (Measure 1.4): Signatories commit to mitigate the risk of copyright-infringing outputs. Instead of merely adopting “reasonable efforts”, Signatories now commit to implementing appropriate and proportionate technical safeguards to prevent memorisation. Moreover, the final GPAI CoP adds a new obligation for open-source GPAI models: they must be accompanied by documentation alerting users to the prohibition of copyright-infringing uses of the model.
  • Point of contact and complaint mechanism (Measure 1.5): The fifth measure obliges GPAI model providers to designate a point of contact for electronic communication with affected rightsholders and put a complaint mechanism in place. Also with regard to this measure the language has been sharpened. The new version adds the requirement to react to rightsholder complaints within a reasonable time, unless a complaint is manifestly unfounded or the Signatory has already responded to an identical complaint by the same rightsholder.
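The robots.txt compliance described in Measure 1.3 can be sketched with Python's standard-library robots.txt parser. The crawler name, site content and URLs below are hypothetical, and a real compliance setup would also have to honour the “other appropriate machine-readable protocols” the measure mentions.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt of a crawled site, reserving rights against
# an AI-training crawler while allowing general crawling.
robots_txt = """\
User-agent: example-ai-crawler
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The rights reservation applies to the AI-training crawler...
print(parser.can_fetch("example-ai-crawler", "https://example.com/articles/1"))  # False
# ...but not to other user agents fetching public pages.
print(parser.can_fetch("other-bot", "https://example.com/articles/1"))  # True
```

A compliant crawler would call `can_fetch` before every request and skip any URL for which the check fails.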

Chapter on Safety and Security

The Safety and Security chapter of the GPAI CoP aims to help signatories comply with Article 55(1) of the AI Act, i.e. the obligations for providers of GPAI models with systemic risk. Among all GPAI CoP chapters, it underwent the most visible changes in this final iteration, with the number of commitments reduced from 16 to 10. This reduction reflects consolidation rather than removal of commitments, as well as relocation of technical content to annexes. Still, a number of substantive changes are worth noting.

Main changes between Version 3 and the final version

  • Security mitigation commitments are based on a self-defined Security Goal: Version 3 required signatories to implement security measures sufficient to meet the technical standard of “at least the RAND SL3 security goal or equivalent.” This benchmark, developed by RAND Corporation in a 2024 research report, describes the level of security needed to protect model weights against well-resourced non-state actors. The final version no longer sets RAND SL3 as a baseline. Signatories must define, justify, and meet a security goal appropriate to the model and relevant threat actors. The RAND SL3 standard now appears only as an optional reference point.
  • Retention period for technical documentation adjusted: The final version requires providers to retain technical documentation for at least ten years after the model is placed on the market. Version 3 required retention for twelve months after the model’s retirement. This ensures that documentation about models that are no longer operational remains available.
  • Retention period for serious incident records extended: The minimum retention period for documentation relating to a serious incident has been increased from three years to five years. This should be measured “from the date of the documentation or the date of the serious incident, whichever is later” so that relevant documents that predate an incident are also retained.
  • Publication of safety and security frameworks and model reports no longer required for “similarly safe or safer” models: Under Version 3, all signatories were expected to publish (e.g. via their websites) summarised versions of their safety and security framework and a model report (a report on the model and its systemic risk handling) for each model, subject to redactions where appropriate. The final version narrows this obligation by allowing publication to be omitted for models that are “similarly safe or safer” (i.e. models that present no greater systemic risk than a previously assessed model, based on comparable risk scenarios, benchmark performance, and model characteristics).

Updated list of Commitments

The final version sets out 10 commitments, each divided into specific “measures”, which signatories commit to implement.

  • Commitment 1: Safety and Security Framework (Measures 1.1 - 1.4): Signatories providing GPAI models with systemic risks commit to (i) creating, (ii) implementing, (iii) keeping updated, and (iv) notifying the AI Office of a state-of-the-art safety and security framework (the Framework). It outlines the systemic risk management processes and measures implemented to ensure that risks are brought to an acceptable level. This must include predefined conditions for targeted risk evaluations (“trigger points”), risk acceptance criteria, forecasts of future risk levels, allocation of responsibilities, and update procedures. The Framework must be formally approved no later than four weeks after a Signatory has notified the Commission that it provides a model that qualifies as a GPAI model with systemic risks under Art. 52(1) AI Act, and at least two weeks before placing the model on the market. The Framework must then be updated at least every 12 months, or sooner if its adequacy or adherence to it is materially undermined. The AI Office must be provided unredacted access to the Framework, and to any updates within five business days.
  • Commitment 2: Systemic Risk Identification (Measures 2.1 - 2.2): Signatories commit to identify systemic risks through a structured process. This must draw on information from various sources, including model-independent research, comparable models, post-market monitoring, incident data, and inputs from the AI Office or endorsed bodies. Signatories develop risk scenarios for each identified systemic risk to support subsequent analysis and acceptance determinations.
  • Commitment 3: Systemic Risk Analysis (Measures 3.1 - 3.6): Signatories commit to analyse each identified systemic risk by collecting relevant evidence, conducting state-of-the-art model evaluations, modelling how the risk could materialise, and estimating its probability and severity. This includes maintaining post-market monitoring to assess whether the risk remains acceptable and whether the Model Report must be updated.
  • Commitment 4: Systemic Risk Acceptance Determination (Measures 4.1 - 4.2): Signatories define and apply clear criteria to determine whether each identified systemic risk, and the model’s overall risk profile, are acceptable. This may include capability-based risk tiers, supported by safety margins to account for uncertainty in future developments, evaluation methods, or mitigation effectiveness. Signatories commit to proceed with development, deployment, or use only if risks are determined to be acceptable; otherwise, they must apply appropriate mitigations and re-assess.
  • Commitment 5: Safety Mitigations (Measure 5.1): Signatories commit to implement appropriate safety measures throughout the model lifecycle to ensure that systemic risks remain acceptable. Mitigations must be robust against adversarial attempts such as fine-tuning or jailbreaks and must reflect the model’s release strategy. Examples include data filtering, output monitoring, fine-tuning for refusals, staged access, transparency tools, and downstream risk controls.
  • Commitment 6: Security Mitigations (Measures 6.1 - 6.2): Signatories commit to apply cybersecurity measures to protect against unauthorised access, release, or theft of models with systemic risk. These must be aligned with a defined security goal addressing foreseeable threat actors, including insiders, and must remain in place until the model is either made publicly available or securely deleted. The obligation does not apply to models whose capabilities are inferior to any model whose parameters are already publicly downloadable. Where standard security measures are not used, alternative measures must achieve equivalent mitigation objectives.
  • Commitment 7: Safety and Security Model Reports (Measures 7.1 - 7.7): Signatories commit to submit a Model Report to the AI Office before placing a model on the market and keep it updated. The report must describe, at a minimum, the model’s architecture, training methods, intended and foreseeable use cases, systemic risks identified, justifications for risk acceptance, mitigations applied, and any relevant external evaluations or reports. Model Reports must be updated whenever the justification for systemic risk acceptance is materially undermined. In addition, for any model that the provider considers among its most capable and that remains under active development, the Model Report must be updated at least every six months. This duty does not apply where (i) the model has not changed, (ii) a more capable model is expected to be placed on the market within the next month, or (iii) the model is demonstrably as safe or safer than it was at the last reporting date.
  • Commitment 8: Systemic Risk Responsibility Allocation (Measures 8.1 - 8.3): Signatories commit to assign clear responsibilities for systemic risk oversight, management, support, and assurance across the organisation, and to ensure adequate human, financial, technical, and informational resources. They commit to promote a healthy risk culture through non-retaliation guarantees, internal reporting channels, and clear communication of the Framework, with periodic assessments by the supervisory body or equivalent.
  • Commitment 9: Serious Incident Reporting (Measures 9.1 – 9.4): Signatories commit to track, document, and report serious incidents without undue delay. Initial reports must be submitted within 2, 5, 10, or 15 days depending on the type of harm, followed by updates every four weeks until resolution and a final report within 60 days of resolution. All incident records must be retained for at least five years.
  • Commitment 10: Additional Documentation and Transparency (Measures 10.1 - 10.2): Signatories commit to maintain internal documentation on the model’s architecture, integration, risk evaluations, and mitigations for at least ten years after placement on the market, and to provide it to the AI Office upon request. Where necessary to assess or mitigate systemic risk, Signatories publish summarised versions of the Framework and Model Reports, subject to redactions to preserve mitigation effectiveness or confidentiality.
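The reporting timeline in Commitment 9 can be expressed as simple date arithmetic. This sketch assumes the applicable initial deadline (2, 5, 10, or 15 days) has already been determined for the incident's harm type and takes it as a parameter rather than guessing the mapping; it also assumes, for illustration only, that the four-weekly updates run from the initial report.

```python
from datetime import date, timedelta

def reporting_schedule(incident: date, resolved: date, initial_deadline_days: int):
    """Hypothetical helper deriving Commitment 9 deadlines: an initial
    report within 2/5/10/15 days (depending on harm type), updates every
    four weeks until resolution, and a final report within 60 days of
    resolution."""
    assert initial_deadline_days in (2, 5, 10, 15)
    initial_due = incident + timedelta(days=initial_deadline_days)
    # Assumption: update cadence counted from the initial report.
    updates = []
    due = initial_due + timedelta(weeks=4)
    while due < resolved:
        updates.append(due)
        due += timedelta(weeks=4)
    final_due = resolved + timedelta(days=60)
    return initial_due, updates, final_due

# Hypothetical incident on 1 September 2025, resolved 1 November 2025,
# with a 10-day initial reporting deadline.
initial, updates, final = reporting_schedule(date(2025, 9, 1), date(2025, 11, 1), 10)
```

Here `initial` falls on 11 September 2025, one interim update is due before resolution, and the final report is due 60 days after 1 November 2025.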

Next steps

By 2 August 2025, the European Commission and Member States will assess the GPAI CoP and may approve it via an adequacy decision. The Commission might thereafter formally adopt the GPAI CoP via an implementing act. In that case, the GPAI CoP acquires general validity, meaning that adherence to it becomes a means of demonstrating compliance with the AI Act - though not a presumption of conformity. The GPAI CoP is final for now, but the AI Office can encourage reviews and updates, in particular in light of emerging standards.

Tags

ai, eu ai act, eu ai act series