
EU AI Act Unpacked – Update: A First Look at the Draft Code of Practice for AI-Generated Content

The EU AI Office has taken a significant step towards clarifying transparency requirements for providers and deployers of AI systems within scope of the AI Act through the first draft of the Code of Practice on Transparency of AI-Generated Content (Draft Code). This draft, developed by multi-stakeholder working groups, provides the first concrete look at how the transparency obligations under Article 50 of the AI Act could be implemented in practice. It outlines a framework for the marking, detection, and disclosure of AI-generated/manipulated content, impacting the providers who create generative AI systems and the businesses that deploy them.

Once finalised, the Draft Code will serve as a voluntary tool to help providers and deployers of generative AI systems to comply with their obligations under Article 50 AI Act. Although adherence to the finalised Code will not be conclusive evidence of compliance, it will likely become the de facto means of demonstrating compliance. 

The Draft Code also moves the conversation from high-level legal principles to specific, operational requirements that, if committed to, will shape product development, content creation, and compliance strategies. In this post, we unpack the key obligations under the Draft Code and outline the practical steps businesses should consider now.

Key Implications of the Draft Code

The Draft Code is split into two main sections, creating distinct but related obligations for AI "providers" and "deployers" that have committed to the final Code of Practice (Signatories). The Draft Code is structured around overarching commitments, each supported by more detailed measures and sub-measures.

For Providers: A Mandate for "Mark and Detect" Technology

Section 1 of the Draft Code focuses on Signatories that are providers of generative AI systems – i.e., entities placing these systems on the market. The Draft Code sets out four commitments related to the marking and detection of AI-generated and manipulated content (including audio, image, video or text) for compliance with providers' obligations under Article 50(2) and (5) AI Act.

  • Multi-Layered Marking: Signatories must apply a combination of active marking techniques, which can be implemented at different stages of the value chain and can also be provided by third parties. Marking techniques include metadata embedding, imperceptible watermarking, fingerprinting (such as hash-matching) or internal logging (see the illustrative sketch after this list).
  • Detection Mechanisms: Signatories must also enable detection of AI-generated content by providing a free-of-charge interface or a publicly available detector (e.g. an API or a public website) to enable users and third parties to verify whether content was generated by an AI system.
  • Requirements for marking and detection techniques: Technical solutions employed for marking and detection must be effective, robust, reliable and interoperable, taking into account the specific content types, costs of implementation and generally acknowledged state of the art.   
  • Testing, verification and compliance: Signatories are required to set up and implement compliance processes, including up-to-date documentation on implemented and planned processes, which is subject to disclosure upon request from competent market surveillance authorities. Signatories must also test and monitor marking and detection solutions, including tracking real-world challenges to such technology. There is also a requirement to cooperate with competent market surveillance authorities to demonstrate compliance.
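
To make the marking and detection commitments more concrete, here is a minimal Python sketch of a "mark and detect" flow of the kind the Draft Code contemplates: provenance metadata attached at generation time, fingerprinting via content hashing, and a simple verification check that a public interface could expose. All names, fields and the registry design are hypothetical illustrations; the Draft Code prescribes no specific schema or API, and real systems would combine this with robust, imperceptible watermarking.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical provenance record attached as metadata to generated content.
# Field names are illustrative, not taken from the Draft Code.
def build_provenance_metadata(model_id: str) -> dict:
    return {
        "generator": model_id,
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

# Fingerprinting via hash-matching: the provider stores a digest of each
# output so that users and third parties can later verify content against it.
def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

class FingerprintRegistry:
    """In-memory stand-in for a provider-side fingerprint store that a
    free-of-charge API or public website could sit in front of."""

    def __init__(self):
        self._digests: set[str] = set()

    def register(self, content: bytes) -> None:
        self._digests.add(fingerprint(content))

    def was_generated_here(self, content: bytes) -> bool:
        # Exact hash-matching only; production systems would also use
        # perceptual hashes or watermark detectors that survive
        # re-encoding, cropping and other transformations.
        return fingerprint(content) in self._digests

# Example: mark an output at generation time, then verify it later.
registry = FingerprintRegistry()
output = b"...model output bytes..."
metadata = build_provenance_metadata("example-model-v1")
registry.register(output)
print(json.dumps(metadata, indent=2))
print("Detected as AI-generated:", registry.was_generated_here(output))
```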

For businesses that develop or provide generative AI systems (including GPAI models), the Draft Code signals a potentially significant technical and financial undertaking. Signatory providers will need to invest in watermarking and detection systems, ensure those systems are effective, robust, reliable and interoperable, and will likely need teams or individuals in place to support the broader tracking, monitoring and engagement obligations under the Code. Furthermore, Signatories that act as downstream providers building on top of third-party models must ensure the underlying model is compliant or implement their own marking solutions.

For Deployers: A Duty to Disclose

Section 2 of the Draft Code applies to Signatories that are deployers of AI systems – i.e., any business deploying an AI system to generate or manipulate content. Section 2 comprises five commitments for compliance with deployers' obligations under Article 50(4) and (5) AI Act:

  • Disclosure of AI-generated and manipulated content: Signatories are required to use a common taxonomy – illustrated in the sketch after this list – to classify content that qualifies as a deepfake, or as AI-generated or manipulated text published with the purpose of informing the public on matters of public interest (Public Interest Text). Put simply, this means Signatories must use a common icon – an interim "AI" logo is proposed pending a final EU-wide version – and categorise content as:
    • Fully AI-Generated Content: Content fully and autonomously generated by an AI system without any human-authored authentic content.
    • AI-Assisted Content: Hybrid content which has mixed human and AI involvement. The Draft Code includes several examples of the types of content falling within this category, including object removal from photos, beauty filters and AI-generated text that mimics a specific person’s style.
  • Compliance, training and cooperation: Commitments 2 and 3 include broad requirements for Signatories relating to internal compliance, training, monitoring and accessibility, including:
    • Creating, keeping up-to-date and implementing compliance processes and documentation outlining how they apply labelling requirements and the AI logo.
    • For deepfakes and AI-generated/manipulated text, the labelling process cannot be based on automation alone; appropriate human oversight is required.
    • Facilitating third parties and users in flagging mislabelled and unlabelled deepfakes and Public Interest Text, and fixing any missing or incorrect labels without undue delay.
    • Cooperating with market surveillance authorities and other third parties, including providers of very large online platforms and search engines (VLOPs/VLOSEs) and regulators.
    • Ensuring icons and AI logos are accessible and conform to applicable accessibility requirements under Union law.
  • Specific commitments: In addition, there are specialised, content-type-specific rules for how disclosures must appear for deepfakes (Commitment 4) as well as AI-generated/manipulated text (Commitment 5), together with two important exemptions:
    • Artistic and Creative Works: For evidently "artistic, creative, satirical, [or] fictional" works, deepfake disclosure must be made in a manner that "does not hamper the display or enjoyment of the work" and placed in a "non-intrusive position".
    • Human Review and Editorial Responsibility: The obligations for AI-generated text do not apply if the content has "undergone a process of human review or editorial control" and a person or entity holds editorial responsibility.
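
As a rough illustration of the taxonomy and human-oversight requirements above, the Python sketch below encodes the content categories and refuses to auto-label deepfakes or Public Interest Text that have not undergone human review. The category names, fields and label format are hypothetical paraphrases of the Draft Code, not an official schema; the final icon and taxonomy will be set out in the finalised Code.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of the Draft Code's content taxonomy; the names
# paraphrase the draft and are not an official vocabulary.
class ContentCategory(Enum):
    FULLY_AI_GENERATED = "fully AI-generated"  # no human-authored content
    AI_ASSISTED = "AI-assisted"                # mixed human/AI involvement
    HUMAN_AUTHORED = "human-authored"          # outside the labelling duty

@dataclass
class ContentItem:
    content_id: str
    category: ContentCategory
    is_deepfake: bool = False
    is_public_interest_text: bool = False
    human_reviewed: bool = False

def requires_human_oversight(item: ContentItem) -> bool:
    # Under the Draft Code, deepfakes and Public Interest Text cannot be
    # labelled on the basis of automation alone.
    return item.is_deepfake or item.is_public_interest_text

def label_for(item: ContentItem) -> str | None:
    """Return the disclosure label to apply, or None if none is required."""
    if item.category is ContentCategory.HUMAN_AUTHORED:
        return None
    if requires_human_oversight(item) and not item.human_reviewed:
        raise ValueError(f"{item.content_id}: human review required before labelling")
    return f"[AI: {item.category.value}]"  # placeholder for the interim AI logo

# Example: a deepfake image must pass human review before it can be labelled.
photo = ContentItem("img-001", ContentCategory.FULLY_AI_GENERATED,
                    is_deepfake=True, human_reviewed=True)
print(label_for(photo))  # -> "[AI: fully AI-generated]"
```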

Key Takeaways

While this is only the first draft, it provides a clear roadmap for the future of AI transparency within the EU. The final Code, expected to be published in June 2026 after a period of stakeholder feedback ending in January 2026, will likely be a foundational document for demonstrating compliance with the AI Act. The rules covering the transparency of AI-generated content would become applicable to Signatories on 2 August 2026.

  • For AI Providers: Compliance through "mark and detect". Signatories should consider integrating a multi-layered marking system (watermarks, metadata) into their generative AI models and providing publicly accessible tools for verification.
  • For AI Deployers: Compliance through "disclose". Any business using generative AI for public-facing content should consider establishing a process for identifying deepfakes and relevant AI-generated text, assessing whether any exemptions apply, and applying the required labels.
  • Role of Human Review: For deepfakes and AI-generated/manipulated text, human oversight can be an important compliance tool. This means ensuring staff are properly trained and understand the requirements, and that there are clear guidelines for determining whether exemptions apply, such as for artistic, creative, satirical or fictional deepfakes, or for editorial control.
  • DSA and AI Act Compliance Landscape: For Signatories operating as both online platforms and deployers, compliance will require an approach that integrates the Draft Code with the DSA. Signatories should review their DSA-mandated risk assessments to ensure they specifically address the risks of AI-generated/manipulated content. Content moderation policies and tools may require updates to detect markings and labels as prescribed by the Draft Code. User-facing transparency materials (including terms and conditions) will also require review to ensure users understand how AI-generated content is moderated.

Tags

eu ai act, ai, eu ai act series