

EU AI Act unpacked #8: New rules on deepfakes

[You can find all episodes of our EU AI Act unpacked blog series by clicking here.]

In this next instalment of our EU AI Act unpacked blog series, we take a closer look at the new rules on deepfakes.

In today’s world, the use of artificial intelligence (AI) has introduced both unprecedented opportunities and significant challenges. Among these challenges, the emergence of deepfakes (ie synthetic media created or manipulated using AI) on a large scale poses a complex problem for individuals, organisations and society as a whole. University College London (UCL) has identified deepfake technology as one of the most significant threats to society today. Deepfakes’ realistic portrayal of individuals and places blurs the line between reality and fiction, amplifying concerns about privacy, ethical implications and security threats, and undermining public trust in the media, public figures and government. They also present content moderation challenges for news outlets and social media platforms, to name just a few. The key concern is that these systems can be used to create ultra-realistic deepfakes or fake news, and can even contribute to large-scale disinformation campaigns. While the EU has already shown its intent to regulate deepfakes under existing legal regimes, the EU AI Act introduces new rules specifically addressing them.

What are deepfakes?

The AI Act defines a deepfake as an ‘AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful’. 

By including ‘objects’, the legal definition expands beyond just persons, places, and events to include any tangible or intangible item that could be misrepresented or manipulated through deepfake technology. This could encompass anything from artwork to products. The term ‘entities’ typically refers to organisations, institutions, or other corporate bodies. In the context of deepfakes, this addition implies that the law is not limited solely to individual persons but also covers the misrepresentation or manipulation of entities such as businesses, governments, or non-profit organisations. This inclusion acknowledges that deepfakes can be used to fabricate statements or actions attributed to organisations, potentially causing reputational or financial harm.

While authenticity relates to the appearance of reality, truthfulness involves a deeper consideration of the accuracy or veracity of the content. The AI Act acknowledges that deepfakes can not only create misleading impressions but also distort reality by presenting false information as true. Requiring truthfulness alongside authenticity could lead to a more stringent legal standard.

The AI Act does not ban deepfakes completely. Instead, it addresses the challenges they pose by imposing strict transparency requirements on both providers and deployers of AI systems.

Technical marks by providers of AI systems 

Providers (see blog post #3 in our series for a definition) of AI systems generating synthetic audio, image, video or text content have to ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Given the widespread availability and increasing capabilities of AI systems, the rapid pace of technological development and the need for new methods and techniques to trace the origin of information, the AI Act deems it appropriate to require providers of those systems to embed technical solutions. These solutions would enable marking in a machine-readable format and detection that the output has been generated or manipulated by an AI system and not a human.

These technical solutions have to be effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. Potential methods involve watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, or fingerprints.
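To make the provider-side obligation more concrete, below is a minimal Python sketch combining two of the methods listed above: a metadata identification and a cryptographic fingerprint with a signature to prove provenance. The AI Act does not prescribe any particular technique, and everything here (key handling, field names, the record format) is an illustrative assumption rather than a compliant implementation.

```python
import hashlib
import hmac
import json

# Hypothetical provider-side signing key; in practice this would be managed by
# the provider's own key infrastructure, not hard-coded.
PROVIDER_SIGNING_KEY = b"example-secret-key"


def mark_output(content: bytes, model_id: str) -> dict:
    """Attach a machine-readable provenance record to AI-generated content.

    Combines a metadata identification (the JSON record) with a cryptographic
    fingerprint and signature indicating the content's artificial origin.
    """
    fingerprint = hashlib.sha256(content).hexdigest()
    record = {
        "generator": "ai",          # flags the output as AI-generated
        "model_id": model_id,
        "content_sha256": fingerprint,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_mark(content: bytes, record: dict) -> bool:
    """Check that the fingerprint matches the content and the signature is valid."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hashlib.sha256(content).hexdigest() == record["content_sha256"]
            and hmac.compare_digest(expected, record["signature"]))


# Example: mark a generated image (represented here by raw bytes) and verify it.
image_bytes = b"...synthetic image bytes..."
mark = mark_output(image_bytes, model_id="example-image-model-v1")
assert verify_mark(image_bytes, mark)
```

In practice, providers would more likely rely on emerging provenance standards that embed such records directly in the media file, but the principle is the same: a machine-readable record that travels with the output and can be verified downstream.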

Such techniques and methods can be implemented at the level of the AI system or at the level of the AI model, including general-purpose AI models generating content, thereby facilitating fulfilment of this obligation by the downstream provider of the AI system. 

To remain proportionate, the AI Act excludes from this marking obligation AI systems that perform primarily an assistive function for standard editing or that do not substantially alter the input data provided by the deployer or the semantics thereof.

Disclosures by deployers of AI systems 

Deployers (again a definition can be found in blog post #3) of an AI system generating or manipulating image, audio or video content that constitutes a deepfake have to disclose that the content has been artificially generated or manipulated. Such disclosure shall be made ‘clearly and distinguishably’ by labelling the respective AI output accordingly and disclosing its artificial origin. The deployer can choose the best way to make such disclosure, and it will probably vary depending on the media involved. 
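By way of illustration only, a deployer-side disclosure for different media types might look like the following sketch; the AI Act does not prescribe any particular wording or placement, and the label text and helper names below are assumptions.

```python
# Illustrative only: how the disclosure is made will depend on the medium involved.
DISCLOSURE_TEXT = "This content has been artificially generated or manipulated."


def label_text_output(text: str) -> str:
    """Prepend a clearly visible disclosure to AI-generated text."""
    return f"[{DISCLOSURE_TEXT}]\n\n{text}"


def label_media_caption(caption: str) -> str:
    """Return a caption for image/audio/video output disclosing its AI origin."""
    return f"{caption} ({DISCLOSURE_TEXT})"


print(label_text_output("A synthetic interview transcript..."))
print(label_media_caption("Video of a public speech"))
```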

The only case in which deployers do not need to reveal that such output was created by AI is where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences.

For AI content that forms part of an ‘evidently artistic, creative, satirical, fictional or analogous work or programme’, the obligation is limited to disclosing the presence of such content in a suitable way that does not interfere with the presentation or enjoyment of the work.

If an AI system generates or manipulates text which is published with the purpose of informing the public on matters of public interest, deployers of that AI system must disclose that the text has been artificially generated or manipulated. This does not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences, or where the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content.

What does this mean for deepfakes and AI-generated content?

This dual requirement, technical marking by the provider and labelling of the AI output by the deployer, ensures that AI-generated or manipulated media can be readily identified either by the technical mark or by the transparent disclosure of its AI origin. In cases of personal, non-professional use (ie where users are not considered deployers under the AI Act), the presence of a technical mark remains essential for identifying ‘private’ deepfakes. Regardless of the disclosure obligation, the technical mark serves as a definitive indicator of AI involvement in the creation process.

The AI Office, created by the AI Act and currently being set up, shall encourage and facilitate the drawing up of codes of practice to support the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The European Commission may adopt implementing acts to approve those codes of practice, or it may adopt an implementing act specifying common rules for the implementation of those obligations.

Whether the AI Act's transparency rules will be enough to mitigate the risks of deepfakes and support a well-informed society remains to be seen. Much will also hinge on whether businesses within the EU adopt robust deepfake detection or simply check for watermarks and consider the job done.

What’s next?

In our next blog, we will explore the enforcement of the AI Act including the role of the AI Office and (other) key actors.

Tags

ai, eu ai act, eu ai act series