Analysis

What the EU’s New AI Code of Practice Means for Labeling Deepfakes

Natalia Garina / Jan 7, 2026

Image by Comuzi / © BBC / Better Images of AI / Surveillance View A. / CC-BY 4.0

The European Commission is working on the Code of Practice on Transparency of AI-Generated Content, a voluntary soft-law instrument intended to facilitate the clear labeling and marking of synthetic media, that is, media created or manipulated with artificial intelligence. The Code will help those involved in the development and use of AI to clearly disclose AI-generated video, audio, images, and text, as required by the transparency obligations set out in the EU Artificial Intelligence Act, which become applicable in August 2026.

Expected to be finalized in May–June 2026, the Code will help establish shared standards and outline practical self-regulatory measures before binding rules come into effect.

On December 17, 2025, the European Commission published the first draft of the Code of Practice on Transparency of AI-Generated Content. The document will provide participating parties with a common, practical framework for compliance, including guidance on labeling, watermarking, metadata, and other technical and organizational measures to enable users to identify AI-generated and AI-manipulated content. With it, the European Union is entering a decisive phase in its effort to govern AI-generated content of all kinds, from text, audio, video, and images to code, avatars, and digital twins, including deepfakes, which have rapidly developed into a global concern.

What the Code covers — and what it does not

The Code of Practice applies only to lawful deepfakes, meaning content that does not in itself constitute a violation of the law. In the context of deepfakes, content becomes illegal when it involves, for example, non-consensual pornography, defamation, terrorist content, violations of privacy, financial fraud, breaches of electoral law, racist or xenophobic hate speech, or infringements of intellectual property rights. Such content must be promptly removed or otherwise effectively moderated.

European Codes of Practice are soft law tools that are not directly binding but guide behavior, set standards, and influence binding legal rules, such as regulations. In practice, they often have real regulatory effects and serve as a bridge between self-regulation and binding legal rules.

The Code serves two purposes by providing guidance: first, on technical mechanisms—such as watermarking, metadata, content detection, and interoperability standards—that enable the effective and reliable machine-readable marking and detection of AI-generated or manipulated content; and second, on disclosure measures that ensure users are clearly informed and can more easily identify deepfakes and AI-generated text published for the purpose of informing the public on matters of public interest.

The obligations depend on the type of actor involved. A person or entity may act as a provider or as a deployer, and the applicable obligations vary accordingly.

Providers of AI systems are companies or organizations that develop, build, or place AI systems on the market under their own name or trademark. This category includes developers of generative AI models and tools (OpenAI, Mistral AI, etc.), companies that commercialize AI-powered applications or platforms (tech giants such as Microsoft, Google, etc.), and organizations that substantially modify an AI system and make it available to others to use (legal-tech and HR-tech companies, software companies that fine-tune foundation models, etc.).

Deployers do not develop, produce, or place AI on the market under their own name, but they use AI in the course of professional activities. Deployers may include organizations, entrepreneurs, or individual consultants who use AI systems under their own responsibility as part of their professional activities. If AI is used purely for non-professional purposes, the relevant marking requirements do not apply, meaning that private individuals who generate AI content for personal use, even when posting such content online, are not subject to the labeling obligations under the AI Act. In that case, responsibility lies with the platform.

Under Article 50 of the AI Act, which sets out transparency obligations, providers of AI systems must ensure machine-readable marking and detectability of AI-generated or AI-manipulated content. Deployers must disclose when AI is used to create realistic synthetic content, including deepfakes, by clearly informing users that such content is artificially generated or manipulated. In practice, this means that deepfakes must be labeled even when the content is lawful; however, where content is evidently artistic, creative, satirical, or fictional, only minimal and non-intrusive disclosure is required.
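
To make the provider-side obligation more concrete, the sketch below shows one simple way a generated image could carry a machine-readable marker, here written into PNG text metadata with the Pillow library. The field names and values are illustrative assumptions rather than anything prescribed by the AI Act or the draft Code; in practice, providers are more likely to rely on interoperable provenance standards (such as C2PA content credentials) or robust invisible watermarks, since plain metadata is easily stripped when content is re-encoded or re-shared.

```python
# Hypothetical sketch: embed and read a machine-readable "AI-generated" marker
# in PNG metadata using Pillow. Field names are illustrative assumptions,
# not mandated by the AI Act or the draft Code of Practice.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save an image together with simple provenance metadata."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # assumed field name
    meta.add_text("generator", generator)   # e.g. the model or tool used
    image.save(path, pnginfo=meta)

def read_ai_marker(path: str) -> dict:
    """Read the marker back, as a simple detection tool might."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}) or {})  # PNG text chunks, if any

if __name__ == "__main__":
    placeholder = Image.new("RGB", (256, 256), color="gray")
    save_with_ai_marker(placeholder, "synthetic.png", generator="example-model-v1")
    print(read_ai_marker("synthetic.png"))
```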

What is in the first draft of the Code of Practice

Two working groups are involved in developing the Code. One group focuses on guidance for marking and detecting AI-generated content, looking at the technical feasibility and how such measures can work across different types of content and emerging use cases (guidance for providers). The other group focuses on disclosure requirements for deepfakes and certain AI-generated text, concentrating on how information is presented to users and how it is understood in practice (guidance for deployers).

The first draft is relatively broad, setting out the overall structure, commitments, and measures, and it is expected to be updated on a regular basis.

At its core, the framework requires signatories to put in place clear and consistent internal processes to identify and classify deepfake content. This means relying not only on automated detection tools, but also on human oversight, and taking into account the context of use, the target audience, the distribution channels, and any applicable exceptions, such as law enforcement uses or content that is artistic, creative, or satirical.
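
As a loose illustration of what such an internal process could look like in practice, the sketch below models a single triage step that combines an automated detector's output with human review and the contextual exceptions the draft mentions. Everything here, including the threshold, the category names, and the list of exempt contexts, is a hypothetical assumption and not taken from the draft Code.

```python
# Hypothetical sketch of an internal triage step for classifying deepfake content.
# The threshold, context categories, and decision labels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ContentItem:
    detector_score: float        # output of an automated detection tool, 0..1
    confirmed_by_reviewer: bool  # whether human oversight confirmed the result
    context: str                 # e.g. "news", "advertising", "satire"
    channel: str                 # distribution channel, e.g. "social_media"

EXEMPT_CONTEXTS = {"art", "satire", "fiction"}  # qualify for minimal, non-intrusive disclosure
DETECTION_THRESHOLD = 0.8                       # assumed score above which content is flagged

def disclosure_decision(item: ContentItem) -> str:
    """Return a rough disclosure decision for one piece of content."""
    flagged = item.detector_score >= DETECTION_THRESHOLD or item.confirmed_by_reviewer
    if not flagged:
        return "no_disclosure_required"
    if item.context in EXEMPT_CONTEXTS:
        return "minimal_disclosure"   # non-intrusive notice for creative or satirical works
    return "full_disclosure"          # common icon plus modality-specific disclaimer

print(disclosure_decision(ContentItem(0.93, False, "news", "social_media")))
```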

The obligation to clearly disclose the artificial origin of a deepfake to users lies with deployers. The first draft of the Code suggests the following: once content is classified as a deepfake, it must be disclosed clearly, in a distinguishable and timely manner, at the latest at the moment of first exposure. Disclosure is based on the use of a common icon, accompanied where necessary by disclaimers adapted to the specific content modality.

Real-time video should display a persistent but non-intrusive icon together with a disclaimer at the beginning of disclosure; non-real-time video may rely on a combination of opening disclaimers, a persistent icon, and end credits; multimodal content must display a visible icon without requiring user interaction; images must include a clearly visible, fixed icon; and audio-only deepfakes must use audible disclaimers, repeated for longer formats and combined with visual cues where a screen is available.
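
One way a deployer's publishing pipeline might operationalize these modality-specific measures is as a simple lookup table consulted before content is shown, as in the hypothetical sketch below. The structure and wording are assumptions made for illustration; the draft Code does not prescribe any particular data format.

```python
# Hypothetical sketch: the draft's per-modality disclosure measures expressed as a
# lookup table a publishing pipeline could consult. Structure and wording are
# illustrative assumptions, not a format prescribed by the draft Code.
from dataclasses import dataclass

@dataclass(frozen=True)
class DisclosureRule:
    icon: str         # how the common icon is displayed
    disclaimer: str   # accompanying disclaimer, if any
    timing: str       # when the disclosure must appear

DISCLOSURE_RULES = {
    "real_time_video": DisclosureRule("persistent, non-intrusive", "shown at the start", "at first exposure"),
    "recorded_video":  DisclosureRule("persistent", "opening disclaimer plus end credits", "at first exposure"),
    "multimodal":      DisclosureRule("visible without user interaction", "none required", "at first exposure"),
    "image":           DisclosureRule("clearly visible, fixed", "none required", "at first exposure"),
    "audio_only":      DisclosureRule("visual cue if a screen is available",
                                      "audible, repeated for longer formats", "at first exposure"),
}

def disclosure_for(modality: str) -> DisclosureRule:
    """Look up the disclosure rule for a given content modality."""
    return DISCLOSURE_RULES[modality]

print(disclosure_for("image"))
```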

Special rules apply to creative, artistic, satirical, or fictional works. In these cases, disclosures should be designed so as not to interfere with the integrity, enjoyment, or normal exploitation of the work, while still ensuring that audiences are informed and that the rights and freedoms of third parties are protected. This includes safeguards for depicted or simulated persons, in order to avoid violations of privacy, dignity, and other rights and freedoms.

How existing EU regulation addresses deepfakes

There is no unified EU-level law on deepfakes. Instead, the EU regulates them through several interconnected instruments, relying on an established content-moderation framework. Two primary mechanisms address deepfakes through content-moderation measures: the AI Act and the Digital Services Act (DSA). The General Data Protection Regulation (GDPR) underpins this regulatory approach.

The AI Act does not regulate content as such. Instead, it regulates AI systems and their use, covering the technological process and the actors involved, including systems that generate or manipulate content. Within the AI Act, deepfakes are addressed through transparency requirements, mandatory labeling, and technical obligations.

The Digital Services Act (DSA) imposes transparency obligations on platforms. It provides a layered framework for content moderation and addresses the problem of deepfakes by holding platforms accountable for the dissemination of illegal content and disinformation, categories into which deepfakes frequently fall. The strengthened Code of Practice on Disinformation, released in 2022 and now integrated into the DSA framework, originally served as a voluntary initiative to combat disinformation, including manipulated content. Following its incorporation into the DSA, it requires the identification and labeling of manipulated content, cooperation with fact-checkers and researchers, and data sharing to improve detection tools.

In early 2026, French authorities launched an investigation into the dissemination of non-consensual sexually explicit deepfakes generated using Grok, X’s artificial intelligence system. The images, which digitally “undressed” women and teenagers without their consent, were reported by French government ministers as manifestly illegal and were referred to prosecutors and the national regulator under the DSA. Dissemination of these images violates national laws and the content-moderation provisions of the DSA.

This case does not primarily concern a failure to label AI-generated content. It highlights a failure of content moderation and of the removal of illegal content: non-consensual deepfakes and sexualized images of children should not merely be disclosed or labeled; they should not be allowed at all. In situations of this kind, the Code of Practice on marking and labeling deepfakes is of limited relevance, while effective enforcement of the DSA and national criminal laws becomes decisive.

While the EU has taken a proactive approach, requiring transparency, labeling, and risk mitigation, concerns remain. What was once seen as solid ground for the regulation of AI-generated content is being reconsidered in light of the European Commission’s Omnibus proposal on the simplification of digital legislation. The Omnibus proposals do not directly repeal the EU’s deepfake rules, but they could delay, soften, and reframe the environment in which those rules will operate, while at the same time making it easier to train powerful generative models on Europeans’ data. That combination could reasonably be expected to increase both the volume and the quality of deepfakes.

As a result, the EU may be required to reconsider existing regulatory tools and develop new approaches to address the evolving risks associated with synthetic media.

Authors

Natalia Garina
Natalia Garina is a legal researcher and consultant specializing in AI policy, digital rights, and freedom of expression, with a background in law and political science. She holds an LL.M. in Digital Law from the Catholic University of Lyon, where she defended a thesis on the regulation and moderati...
