
Freshfields TQ



EU AI Act unpacked #15: Will the AI Act remain as is? Level II legislation and Commission guidance

In part 15 of our EU AI Act unpacked blog series, we take a look at when and how the EU AI Act (AI Act) will continue to evolve over the coming months and years.           

More than any other regulation, the AI Act leaves room for secondary legislation and guidance enabling the European Commission (Commission) to further specify certain obligations. 

Harmonised standards and common specifications on obligations for high-risk AI systems and general-purpose AI models

As secondary legislation with high practical relevance, the AI Act mandates the Commission to request ‘without undue delay’ one or more European standardisation organisations to draft so-called harmonised standards. These standards may cover the requirements for high-risk AI systems as well as the obligations for providers of general-purpose AI (GPAI) models and GPAI models with systemic risk. In addition, the standards might include processes to improve the resource performance of AI systems (eg, reducing energy consumption of high-risk AI systems), and the energy-efficient development of GPAI models.

Since the adoption of the AI Act, the Commission has already requested the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) to publish ten harmonised standards covering the obligations for high-risk AI systems by 30 April 2025 (see also our blog post #10: ISO 42001 - a tool to achieve AI Act compliance?). CEN and CENELEC have already announced that they are unlikely to meet the original deadline and instead expect to publish the standards by the end of 2025.

If a request for harmonised standards fails or the standards are not delivered within the deadline, the Commission has the right to adopt common specifications for these AI Act requirements, which apply until an applicable harmonised standard is published.

Regarding the obligations for GPAI model providers, there is no deadline yet for the adoption of harmonised standards. But once such standards are issued, they will trump the codes of practice (see below in this post), and organisations adhering to the codes will need to consider switching to the harmonised standard instead.

The standards under the AI Act will play an important role in practice: High-risk AI systems or GPAI models which are in conformity with a harmonised standard (the reference of which has been published in the Official Journal of the European Union) or common specifications are presumed to be in conformity with the AI Act requirements covered by the standard or the specifications. In particular, a ‘comply or explain’ mechanism applies: If providers of high-risk AI systems or GPAI models choose not to implement the standard or specification, they have to describe the means or justify the technical solutions that meet the relevant requirements.

Codes of practice on obligations for general purpose AI models and transparency obligations

As further secondary legislation, so-called codes of practice will specify the obligations of providers of GPAI models, including those with systemic risk. The codes should, for example, set out the means to ensure that the technical documentation and the documentation for downstream providers are kept up to date, as well as the adequate level of detail for the training content summary. As regards GPAI models with systemic risk, codes of practice are envisaged to help establish a risk taxonomy of the type and nature of the systemic risks at Union level and their sources. The codes are also supposed to cover specific risk assessment and mitigation measures and their documentation.

Further codes of practice are expected on the obligations regarding the detection and labelling of artificially generated or manipulated content. 

As with the harmonised standards above, a comply or explain mechanism applies: If providers choose not to rely on the codes, they will need to demonstrate compliance using alternative adequate means to be approved by the Commission.

The AI Office encourages and facilitates the drawing up of codes of practice. To that purpose, the AI Office has launched a multi-stakeholder consultation inviting providers of GPAI models and public authorities to participate in the process. Other stakeholders, such as civil society organisations, industry, academia, downstream providers, rightsholders' organisations and independent experts, are also invited to contribute their expertise. This consultation closed on 18 September 2024, and the AI Office is currently analysing the input received from the relevant stakeholders, as their responses will serve as a basis for the initial draft. The drafting exercise started on 30 September 2024 with the first Code of Practice Plenary organised by the AI Office. The plenary was attended by a range of stakeholders including GPAI providers, downstream providers, industry, civil society, academia and independent experts.

The AI Act requires these codes to be ready by April 2025, and the Commission plans to publish at least the GPAI model codes by this date. If this deadline cannot be met, or if the AI Office deems a code inadequate, the Commission may provide common rules for the implementation of the concerned AI Act obligations. In general, the Commission may decide either to officially approve a code of practice (by way of implementing act) or, if it does not consider the code adequate, to specify common rules for the implementation of the relevant obligations.

Templates by the Commission

The European Commission and the AI Office will publish templates that will help to meet different AI Act obligations. Templates are expected for:

  • the summary of content used for training of GPAI models. The Commission plans to publish this template within 12 months of the entry into force of the AI Act, meaning by early July 2025.
  • the post-market monitoring plan providers of high-risk AI systems have to set up, and the list of elements to be included in the plan. The AI Act requires this template to be published by 2 February 2026.
  • a questionnaire for the fundamental rights impact assessment certain deployers of high-risk AI systems have to conduct (there is no specific deadline yet).

The European AI Board, which met for the first time on 10 September 2024 to discuss the first deliverables on the implementation of the AI Act, may in upcoming meetings request the AI Office to draft templates for other areas as well.

Delegated Acts – tweaking the AI Act

By means of delegated acts the Commission has the power to amend several areas of the AI Act, including:  

  • the classification of high-risk AI systems listed in Annex III of the AI Act (Annex III concerns eg AI systems in the area of biometrics, critical infrastructure, education or employment). The Commission may amend Annex III by adding, modifying or removing use-cases of high-risk AI systems and may also adapt the conditions under which such AI system is considered not to be high risk. The Commission will assess the need for amendment once a year following the entry into force of the AI Act.
  • the threshold for classifying a GPAI model as a GPAI model with systemic risk. Currently, this threshold is met when the cumulative amount of computation used for training the model, measured in floating point operations, is greater than 10²⁵. The Commission plans to adjust this threshold over time and may also supplement benchmarks and indicators to take account of technological and industrial changes, such as improvements in algorithms and increased hardware efficiency, to ensure that the threshold reflects the state of the art.
  • the elements of the technical documentation of high-risk AI systems and general-purpose AI models,
  • the conformity assessment procedures, 
  • the information to be included in the EU declaration of conformity.

The power to adopt delegated acts is conferred on the Commission for a period of five years from 1 August 2024 and may be tacitly extended thereafter. 
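To get a feel for the 10²⁵ FLOP threshold mentioned above, the following sketch uses the widely cited rule of thumb that training compute for a dense transformer is roughly 6 × parameters × training tokens. Note that this heuristic, the function names and the example figures are illustrative assumptions for this post; the AI Act itself specifies only the threshold, not a calculation method.

```python
# Illustrative sketch only: the 6*N*D compute heuristic and the example
# model sizes are assumptions, not part of the AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # current threshold under the AI Act


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate using the common 6*N*D rule of thumb."""
    return 6 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15 trillion tokens
# lands at about 6.3e24 FLOPs, just below the current threshold;
# a 200B-parameter model on the same data would cross it.
print(presumed_systemic_risk(70e9, 15e12))   # below threshold
print(presumed_systemic_risk(200e9, 15e12))  # above threshold
```

Because the Commission may adjust the threshold by delegated act, keeping the constant configurable rather than hard-coded would be the more robust design in any real compliance tooling.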

Guidelines and guidance 

The European Commission will develop several guidelines providing practical advice on how to implement important aspects of the AI Act. 

Guidelines are expected on 

  • the definition of the term ‘AI system’,
  • prohibited practices, 
  • obligations for high-risk AI systems and obligations along the AI value chain, 
  • the transparency obligations for certain AI systems (AI systems directly interacting with natural persons, AI systems generating synthetic audio, image, video or text content, emotion recognition systems and biometric categorisation system, deep fakes, and AI systems generating or manipulating public interest texts), 
  • the provisions related to substantial modification, and
  • the interplay of the AI Act and the product safety legislation listed in Annex I of the AI Act.

The Commission plans to publish the first guidelines, on the definition of 'AI system' and on prohibited practices, within the first six months of entry into force of the AI Act, meaning by January 2025. For the other guidelines, there is no specific deadline yet.

Once published, the European Commission may update the guidelines at any time, on its own initiative or at the request of the AI Office or EU Member States.

Later on, further Commission guidance is expected on:

  • the reporting of serious incidents of high-risk AI systems to market surveillance authorities. The AI Act obliges the Commission to publish this guidance by 2 August 2025 and to assess it regularly thereafter.
  • the classification rules for high-risk AI systems together with a list of practical examples of use cases of AI systems that are high-risk and not high-risk. This guidance is expected no later than 2 February 2026.

Although these guidelines and guidance will be non-binding, it will likely be advisable to follow them, as they reflect the Commission's interpretation of the AI Act, and the Commission will, mainly through the AI Office, enforce the AI Act at EU level. National regulators and courts will likewise look to these guidelines when enforcing the Act.

Voluntary application of the AI Act via codes of conduct and the AI Pact

Providers and deployers of AI systems or organisations representing them may draw up codes of conduct for the voluntary application of obligations for high-risk AI systems to other (non-high-risk) AI systems. These codes of conduct can adapt the existing high-risk rules to the intended purpose of the non-high risk AI systems and the lower risk involved and can take into account available technical solutions and industry best practices such as model and data cards.

Codes of conduct can also cover additional requirements, such as environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems, or elements of the Union's Ethics Guidelines for Trustworthy AI.

Codes of conduct should be developed in an inclusive way, involving any interested stakeholders and their representative organisations, civil society organisations and academia. The drawing up of the codes will be facilitated by the AI Office and the EU Member States.

By 2 August 2028 and every three years thereafter, the Commission will evaluate the impact and effectiveness of voluntary codes of conduct.

In parallel to the codes of conduct, and with the aim of bridging the gap between the entry into force of the AI Act and its application, the AI Office proposed an 'AI Pact' encouraging organisations to adopt principles of the AI Act in advance of its application. The AI Office started a series of workshops in May 2024 involving organisations that had expressed interest in joining this voluntary initiative to share best practices and implementation challenges. As a result, the voluntary pledges agreed by the participants were signed on 25 September 2024. These voluntary pledges include three core actions for the signatories, namely (i) developing an AI governance strategy, (ii) identifying AI systems likely to be categorised as high-risk under the AI Act, and (iii) promoting AI literacy and awareness among staff.

Key takeaways

The timelines outlined above may of course change based on practical realities, but they give an indication of how and when the AI Act will continue to evolve and what to look out for.
