
EU AI Act unpacked #19: General-Purpose AI Code of Practice – an overview

As forecast in our previous blog post, the AI Office published the ‘First Draft General-Purpose AI Code of Practice’ (the draft CoP) on 14 November 2024. The draft CoP specifies the obligations under the AI Act for providers of general-purpose AI models (GPAI models) and for providers of GPAI models with systemic risk, but also introduces new obligations not included in the AI Act. A ‘comply or explain’ mechanism applies: if providers choose not to rely on the CoP, they will need to demonstrate compliance using alternative adequate means, to be assessed by the European Commission. More generally, the finalised text of the CoP will most likely heavily influence how the European Commission interprets and applies the provisions of the AI Act going forward.

The first draft is the result of a joint effort among hundreds of participants from industry, academia and civil society, chosen for their backgrounds in computer science, AI governance and law, appointed by the AI Office and organised into four thematic working groups:

1) Transparency and copyright-related rules;
2) Risk identification and assessment for systemic risk;
3) Technical risk mitigation for systemic risk;
4) Governance risk mitigation for systemic risk.

Accordingly, the commitments of the draft CoP concern (1) transparency requirements, (2) compliance with EU copyright law, (3) a risk taxonomy, (4) risk identification and mitigation and (5) risk governance measures.

More detailed transparency requirements and an acceptable use policy

Under the AI Act, GPAI model providers have to draw up technical documentation for their AI models and provide it to the AI Office and national competent authorities upon request. Likewise, GPAI model providers must provide model documentation to downstream providers (ie those integrating the model into their own AI systems). The draft CoP substantiates the elements that such documentation must include, for example requiring more detailed information on the model’s intended tasks, architecture and design specification, as well as on training, testing and validation data, such as data acquisition methods (eg web crawling/scraping, data licensing).
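For illustration, the sketch below shows one hypothetical way such documentation elements could be captured in a structured record. The field names merely paraphrase the draft CoP’s documentation items; the draft does not prescribe any data format, and all values are invented.

```python
# Hypothetical structured record for GPAI model documentation.
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    model_name: str
    intended_tasks: list[str]            # tasks the model is designed to perform
    architecture: str                    # eg "decoder-only transformer"
    design_specification: str            # free text or a link to the full spec
    data_acquisition_methods: list[str]  # eg ["web crawling/scraping", "data licensing"]
    training_testing_validation_data: list[str] = field(default_factory=list)

# Example record that could be provided to the AI Office or a downstream
# provider on request (all values invented for illustration).
doc = ModelDocumentation(
    model_name="example-gpai-model",
    intended_tasks=["text generation", "summarisation"],
    architecture="decoder-only transformer",
    design_specification="https://example.com/model-spec",
    data_acquisition_methods=["web crawling/scraping", "data licensing"],
)
```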

The draft CoP emphasises the importance of an Acceptable Use Policy (AUP) for GPAI models and details its content. To that end, the draft CoP contains a list of essential elements of an AUP: for example, a sufficiently detailed AUP must include a purpose statement, acceptable and prohibited uses, security protocols and termination policies for misuse.

More details on the copyright policy and compliance with EU copyright law

Under the AI Act, GPAI model providers must put in place a copyright policy. The policy must cover compliance with EU copyright law throughout the lifecycle of the GPAI model. This includes honouring opt-outs by rightsholders from the text and data mining (TDM) exception under Article 4(3) EU Copyright Directive (EUCD), which is widely considered to be relevant for the collection of copyrighted content for model training (eg scraping). The draft CoP specifies these requirements:

  • Upstream and downstream compliance: Under the draft CoP, signatories commit to ensuring that third-party data sources used for model training comply with copyright law (upstream compliance) and to mitigating the risk that a downstream system or application into which a GPAI model is integrated generates copyright-infringing output (downstream compliance). According to the draft CoP, upstream compliance should involve due diligence on third-party data sets and verifying that any rights reservations by rightsholders have been honoured by the third-party provider. Downstream compliance includes avoiding ‘overfitting’ of the model (meaning that the model memorises the training data so closely that it fails to generalise to new data) and contractual safeguards to be agreed with the downstream provider. However, the wording of the draft CoP is still ambiguous and will require clarification in the next draft.
     
  • Compliance with the limits of the TDM exception: The draft CoP further requires that GPAI model providers engaging in text and data mining for model training ensure lawful access to data sources and make best efforts, in accordance with ‘widely used industry standards’, to identify and comply with TDM opt-outs. Specific measures include: only using crawlers that respect the robots.txt protocol (a minimal illustration follows this list), taking reasonable measures to avoid crawling piracy websites (eg websites flagged on the EU Counterfeit and Piracy Watch List), and collaborating with relevant stakeholders to identify interoperable standards for expressing copyright reservations in a machine-readable format.
     
  • Transparency: The draft CoP also requires transparency about the copyright measures taken. Signatories to the CoP must publish their copyright compliance measures on their website, including information about the tools used for crawling and for identifying rights reservations. Further, they are required to set up a single point of contact for handling copyright complaints from rightsholders and their representatives (eg collective management organisations). This obligation is entirely new and not required under the AI Act. Signatories must also document their data sources and authorisations to ensure lawful data usage for AI model development.
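As a concrete illustration of the robots.txt measure mentioned above, the following minimal Python sketch checks a site’s robots.txt before fetching a URL, using only the standard library. The crawler name and URL are hypothetical, and the draft CoP does not prescribe any particular implementation.

```python
from urllib import robotparser
from urllib.parse import urlparse

USER_AGENT = "ExampleGPAITrainingBot"  # hypothetical crawler name

def may_fetch(url: str) -> bool:
    """Return True only if the site's robots.txt permits crawling this URL."""
    parsed = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    url = "https://example.com/articles/some-page"
    print("allowed" if may_fetch(url) else "disallowed; skipping")
```

A production crawler would additionally cache robots.txt responses and handle fetch errors; the sketch only shows the basic permission check.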

A taxonomy of systemic risk

Under the AI Act, providers of GPAI models with systemic risk (ie the most powerful models) must perform model evaluations and assess and mitigate systemic risks. The draft CoP now introduces a taxonomy of systemic risks intended to serve as a basis for the systemic risk assessment and mitigation (a structured sketch follows the list below).

  • The risk taxonomy lists the types, nature and sources of systemic risk that providers should consider in their assessment. The types of risk go beyond the systemic risks identified under the AI Act and seem to be inspired by international sources such as the G7 Code of Conduct (see our previous blog post).
     
  • The nature of systemic risk refers to key characteristics of risks that influence the risk assessment and mitigation, including attributes such as the origin of the risk (model capabilities, model distribution), the novelty of the risk and the velocity at which the risk materialises.
     
  • The sources of systemic risk or ‘risk factors’ refer to elements that alone or in combination give rise to risks. These include dangerous model capabilities (eg cyber-offensive capabilities or chemical, biological, radiological and nuclear (CBRN) capabilities), dangerous model characteristics (such as bias, a tendency to deceive or a lack of reliability) and contextual elements such as the number of business users and end-users.
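To see the taxonomy’s dimensions side by side, here is a hypothetical sketch encoding them as simple Python structures. The labels paraphrase the draft CoP; they are not an official schema, and the draft’s own list of risk types is not reproduced here.

```python
from dataclasses import dataclass
from enum import Enum

class RiskNature(Enum):
    """Key characteristics of a risk that shape its assessment and mitigation."""
    ORIGIN = "origin of the risk (model capabilities, model distribution)"
    NOVELTY = "novelty of the risk"
    VELOCITY = "velocity at which the risk materialises"

class RiskSource(Enum):
    """'Risk factors' that alone or in combination give rise to risks."""
    DANGEROUS_CAPABILITY = "eg cyber-offensive or CBRN capabilities"
    DANGEROUS_CHARACTERISTIC = "eg bias, tendency to deceive, lack of reliability"
    CONTEXTUAL = "eg number of business users and end-users"

@dataclass
class SystemicRiskEntry:
    """One entry of a provider's risk register built on the taxonomy.

    risk_type is free text here because the draft's list of risk types
    is not reproduced in this sketch.
    """
    risk_type: str
    nature: list[RiskNature]
    sources: list[RiskSource]
```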

A Safety and Security Framework, Safety and Security Reports and more details on risk identification and mitigation

The draft CoP provides for more detailed rules on the risk assessment and mitigation for GPAI models with systemic risk: 

  • Safety and Security Framework (SSF): Under the draft CoP, signatories need to set up a Safety and Security Framework (SSF) detailing their risk management policies to proactively assess and mitigate systemic risks. The comprehensiveness of the SSF and the commitments it contains should be proportionate to the severity of the expected risk.
     
  • Risk identification and risk analysis: Signatories to the CoP commit to continuously identifying systemic risk using the CoP’s risk taxonomy. The CoP includes further requirements for analysing these risks, eg identifying the probability of the risks materialising through specified pathways, and requires signatories to categorise the sources of risk into ‘tiers of severity’.
     
  • Evidence collection and model evaluation: As part of the Safety and Security Framework signatories commit to use different methods to collect evidence of the systemic risk presented by their model and to run evaluations to assess the capabilities and limitations of their GPAI models following the rules provided by the CoP.   
     
  • Risk assessment lifecycle: The risks have to be assessed continuously throughout the model’s life cycle. The draft CoP divides the lifecycle into different stages: (1) before training, (2) during training, (3) during deployment and (4) post deployment.
     
  • Risk mitigations: In the Safety and Security Framework, signatories commit to map each systemic risk tier of severity to adequate safety and security mitigations (see the sketch after this list). Safety mitigations include safeguards placed around the model for deployment in a system; security mitigations include methods such as red-teaming or insider threat screening.
     
  • Safety and Security Reports (SSR): Signatories commit to draft and regularly update Safety and Security Reports (SSR) for each GPAI model they develop to document risk and mitigation assessments. These SSRs are meant to be used as the basis for any development and deployment decisions for that model. Processes for proceeding or not proceeding with further development or deployment are to be included in the Safety and Security Framework.
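As a rough illustration of the tier-to-mitigation mapping described above, the sketch below pairs severity tiers with example mitigations. The red-teaming and insider threat screening examples come from the draft CoP’s description; the tier names and the staged-deployment entry are invented for illustration.

```python
# Hypothetical mapping of systemic risk severity tiers to mitigations,
# as an SSF might record it. Tier names are invented for illustration.
SEVERITY_TIER_MITIGATIONS: dict[str, dict[str, list[str]]] = {
    "tier 1 (lower severity)": {
        "safety": ["safeguards placed around the model for deployment"],
        "security": ["red-teaming"],
    },
    "tier 2 (higher severity)": {
        "safety": [
            "safeguards placed around the model for deployment",
            "staged or restricted deployment",  # invented example
        ],
        "security": ["red-teaming", "insider threat screening"],
    },
}

def mitigations_for(tier: str) -> dict[str, list[str]]:
    """Return the safety and security mitigations mapped to a severity tier."""
    return SEVERITY_TIER_MITIGATIONS[tier]
```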

Risk governance measures

Going beyond the requirements under the AI Act, CoP signatories commit to addressing systemic risks by allocating responsibility and resources for these risks at executive and board level (or equivalent).

Further governance measures include: 

  • Assessing the Safety and Security Framework annually, involving independent experts.
  • Identifying, keeping track of, documenting and reporting serious incidents.
  • Implementing whistleblowing channels and affording appropriate whistleblowing protections.
  • Publishing the SSF and the SSRs.

Relevance of the CoP

Adherence to the CoP will be voluntary, and signatories can use it to demonstrate compliance with certain AI Act requirements until a harmonised standard covering the same AI Act obligations is published (see our blog post). However, as noted at the outset, a ‘comply or explain’ mechanism applies: providers that choose not to rely on the CoP will need to demonstrate compliance through alternative adequate means, which the European Commission will assess.

In addition, the CoP commitments might influence the AI Office’s understanding and interpretation of the GPAI model provider requirements covered by the CoP. The CoP also has the potential to define the ‘industry standards’ relevant to complying with the limits of the TDM exception.

Next steps and expected timeline

As a first draft, the CoP is still subject to further refinement: following an iterative process of internal discussion within the working groups and additional external input from stakeholders, measures may be added, removed or modified. The first draft does not yet contain the final level of granularity but rather aims to set out broad agreement on the structure and principles of the code.

The stakeholders participating in the drafting of the CoP have discussed the first draft in thematic working groups, and the Chairs presented key insights from the discussions to the full plenary, made up of nearly 1,000 participants, on 22 November 2024. The draft CoP was open to further stakeholder feedback until 28 November 2024; the Chairs will now revise the draft on the basis of the feedback received.

Further discussion and drafting sessions are scheduled until the end of April 2025. We will likely see the publication of the second draft in the week of 16 December 2024 and the third draft in the week of 17 February 2025.

The final text will be presented at a closing plenary, expected to be held by the end of April 2025, where GPAI model providers will be able to express whether they plan to commit to adhering to the rules of the final CoP. The AI Office plans to publish the final version of the General-Purpose AI Code of Practice by 1 May 2025.

Tags

ai, eu ai act, eu ai act series