The EU’s Code of Practice on Disinformation Is Now Part of the Digital Services Act. What Does It Mean?
Ramsha Jahangir / Feb 25, 2025
A pedestrian walks in front of the entrance to the European Commission headquarters in Brussels, Belgium on July 1st, 2021. Shutterstock
The European Commission has taken a significant step toward strengthening its regulatory approach to mitigating online disinformation. Earlier this month, the Commission formally integrated its voluntary Code of Practice on Disinformation into the Digital Services Act (DSA) framework. This is not merely a procedural update; it is a strategic pivot, positioning the Code as a "relevant benchmark" for platform compliance. Effective July 1, 2025, the move transforms the Code from a voluntary agreement into a cornerstone of the DSA, with potential implications for enforcement actions against online platforms that fail to adhere to its provisions.
But is this integration anything more than a regulatory sleight of hand? Tech Policy Press asked experts whether this integration will provide the tools necessary to effectively mitigate disinformation while preserving free expression.
From promise to practice
Under the DSA, very large online platforms (VLOPs) and very large online search engines (VLOSEs) are required to conduct risk assessments and audits to identify and mitigate systemic risks on their services, including the spread of disinformation. Adhering to the Code is one way platforms can demonstrate that they are taking concrete steps to address these risks.
“The Code will serve as a crucial benchmark for assessing DSA compliance and ensuring the accountability of tech platforms to a significant extent,” Stanislav Matejka, Vice-Chair of the European Platform of Regulatory Authorities (EPRA), told Tech Policy Press. “Since the DSA does not explicitly define systemic risks related to disinformation, the Code outlines concrete measures that signatories apply to combat these risks. This comprehensive approach adds substantial value to the Commission's work in evaluating VLOPSEs' compliance with the DSA on an individual basis, thereby greatly enhancing overall accountability,” he said.
The DSA itself offers little guidance on what qualifies as a “systemic risk.” The Code fills that gap with a structured framework of more detailed and technical guidance, including specific quantitative and qualitative key performance indicators (KPIs), that platforms can implement to reduce the prevalence and impact of disinformation.
Addressing free expression concerns related to the DSA, experts clarified that both the DSA and the Code aim to mitigate platform-driven disinformation risks rather than mandate content removal. “The Code helps break down complex risks into reporting indicators. We’ve spent a lot of time considering what effective and proportionate risk mitigation means in the real world when it comes to misinformation,” said Stephan Mündges of the European Fact-Checking Standards Network (EFCSN).
“The Code is voluntary, but platforms that do not sign it are still required to show they are taking equivalent measures against disinformation to demonstrate DSA compliance,” Mündges added. “For instance, if Meta were to retreat from its fact-checking commitments and introduce Community Notes in Europe, it would have to demonstrate their effectiveness,” he said, pointing to research on the impact of fact-checkers in Community Notes.
Benchmark or regulatory sleight of hand?
The Code's provisions, while comprehensive, face the inherent challenge of implementation. The European Union launched its first Code of Practice on Disinformation in 2018. An updated version came into effect in 2022 in response to a call by the European Commission to strengthen the commitments. Initially, the Code attracted a broad range of major tech companies, signaling a collective effort to address online disinformation. These companies agreed to implement various measures, including demonetizing disinformation, ensuring transparency in political advertising, maintaining service integrity, and empowering users and fact-checkers.
However, the Code has recently seen significant rollbacks, including at X (which left the Code altogether following Elon Musk’s acquisition), Google, and Microsoft. According to a report from Democracy Reporting International (DRI), platforms reduced the number of measures they committed to under the Code by 31% between 2022 and 2025. These withdrawals raise critical questions about the Code’s long-term viability and its ability to achieve its objectives.
DRI's recent report also found that:
- Microsoft and Google (including Search and YouTube) fully withdrew from fact-checking measures. Overall, Microsoft made the most significant rollbacks.
- Meta has maintained its fact-checking commitments for now, but its long-term support remains uncertain.
- Google, Microsoft, and TikTok withdrew from all political advertising measures, citing their bans on such ads. However, DRI and others have documented cases of users bypassing these bans.
Expressing disappointment that many of the biggest online platforms decided not to commit to the fact-checking chapter of the Code, the EFCSN described its integration into the DSA as “a strong signal that the EU is committed to enforcing its laws despite threats and pressures from abroad, but the actual enforcement is key to evaluating if the Code’s objectives are met.”
Claes de Vreese, University Professor of Artificial Intelligence and Society at the University of Amsterdam, argued it's time for a “game changer,” noting that the voluntary code was not doing its intended work and companies were walking away. “The Code needs proper implementation, monitoring, and compliance assessment. This is absolutely on the European Commission to ensure this,” he told Tech Policy Press.
Effective compliance assessment is the linchpin of the DSA’s ability to regulate online platforms, and audits are an essential part of that process. According to Mündges, the DSA leaves significant “wiggle room” for platforms to set their own benchmarks for auditors to assess compliance. In addition, due to a lack of guidance from the Commission, there is no way to measure the quality of these audits.
“As highlighted by the conclusion of the European Board for Digital Services, these audits need to be carried out by experts who understand the complexities of disinformation and platform systems. That means relying on professionals with real experience—whether from practice, academia, or research—who can apply independent, critical judgment,” said EPRA’s Matejka. He believes the proof will be in the pudding: whether these measures are sufficient will become clearer after the first round of the Code’s audit reports and feedback from all relevant stakeholders, including platforms, auditors, the Commission, and Code signatories.
Bureaucratic complexity
Beyond platforms’ retreat from their commitments, experts worry that Brussels’ bureaucratic complexity may overshadow the Code’s effectiveness, leaving fundamental issues unresolved.
Alexandre Alaphilippe, Executive Director of EU DisinfoLab, is most concerned about the “continued repetition of the same cycle that has occupied Brussels for the past eight years.” “A new ‘co-regulatory’ mechanism is introduced — first the Code of Practice, then the ‘strengthened’ Code of Practice, now the Code of Conduct — followed by risk assessments, implementation phases, audits, adaptations, and re-audits. Eventually, a new and supposedly improved framework is designed and launched, only for the cycle to begin again,” he said.
Another key concern raised by experts is the stark asymmetry of resources. The Code of Conduct expects civil society to expand its platform monitoring role yet offers no funding. “The asymmetry of means and expectations consistently favors platforms, despite some repeatedly failing to fulfill their commitments or openly declaring their intent to contest any regulation, both legally and politically,” said Alaphilippe.
Political pressure from the US
The Code’s integration also comes as Europe's "super-regulator" status fuels transatlantic tensions, placing EU tech rules at the center of a growing US-EU standoff. As Washington watches, will Brussels’ regulatory resolve crack under pressure?
“At a time we could possibly be celebrating the transition of the Code of Practice into a Code of Conduct, big questions around its effectiveness must be raised,” said Colette Wahlqvist of Copenhagen-based International Media Support (IMS). She highlighted that Meta’s fact-checking exit [in the US] was particularly troubling for EU candidate countries like Moldova and Ukraine, which, lacking DSA protections, have relied heavily on the Code of Practice on Disinformation as a negotiation tool. “Meta’s recent decision, preceded by X’s pull-out from the Code, and a general increase of mistrust in media and information sources looming over us is an elephant in the room that cannot be ignored,” she added.
Some experts think compliance is already a challenge. “The Code is an important but not sufficient step. We see that US tech companies are turning to the new US administration to shy away from transparency and responsibility,” said de Vreese.
According to Claire Pershan of the Mozilla Foundation, companies withdrawing their fact-checking commitments is just the tip of the iceberg. “Things like data access for researchers may also be at risk. The code is a multi-stakeholder achievement, but it will not serve as a sufficient benchmark to judge these companies’ commitment to truly tackling disinformation and upholding the spirit of the DSA,” she said.
Others argued the EU had the capacity to defend freedom of expression against platform-driven censorship. “The question should not be if Brussels will uphold its commitment, but rather how swiftly and effectively it will act,” said Alaphilippe.