Beyond Disinformation: How DSA Risk Assessments Ignore Democracy’s Real Threats
Orsolya Reich, Sofia Calabrese / Mar 19, 2025

Dr. Orsolya Reich is a senior advocacy officer at the Civil Liberties Union for Europe, and Sofia Calabrese is a digital policy manager at the European Partnership for Democracy.

Pro-democracy march in Berlin, Germany, before the election of the European Parliament (June 8, 2024; Leonhard Lenz, CC0, via Wikimedia Commons).
The European Union’s Digital Services Act (DSA) was intended to curb the unchecked power of giant digital platforms by requiring them to assess and mitigate risks related to their impact on, inter alia, civic discourse and electoral processes. Many had hoped online platforms would use this process for genuine self-reflection, examining their own systems, acknowledging the harms they cause, and implementing meaningful reforms. Yet, when the first reports were published in late 2024, it quickly became clear to analysts, civil society experts, and academics that this regulatory approach fell short of expectations. As analyses in Tech Policy Press and elsewhere have pointed out, the published reports are largely self-referential, vague, and evasive, raising more questions than they answer. Most of these analyses, however, have focused broadly on methodological flaws and missing information; few have examined how the risk assessments specifically address the integrity of civic discourse and electoral processes. This gap in scrutiny is precisely what we intend to address.
While 19 Very Large Online Platforms and Very Large Online Search Engines (VLOPs and VLOSEs) published their risk assessments, audit reports, and audit implementation reports, we focused on the six services with arguably the most significant influence on civic discourse and elections. Despite their professed commitments, the platforms under scrutiny — Facebook, Instagram, Google, YouTube, TikTok, and X — show a striking reluctance to consider how their systems influence civic discourse and electoral processes.
Risks to civic discourse
Consider first risks to civic discourse. In our previous research, we argued that the DSA, responding to democratic decline in Europe, aims to protect democratic civic discourse. Its goal is to foster a space for self-governing individuals who can inform themselves, form accurate beliefs, and meaningfully engage in discussions about socio-political developments in their respective Member States and the European Union.
We have defined civic discourse as more than just an exchange of opinions; it is a process through which people inform themselves and collectively reason about the options their communities could choose. For a democracy to function optimally, civic discourse must a) be inclusive, pluralistic, and accessible; b) recognize and respect differences in viewpoints and sociopolitical divisions; c) show a commitment to facts and informed dialogue, and build citizen awareness and knowledge on pertinent issues; d) enable citizen engagement and representative attention.
In parallel, our research identified four categories of risk, comprising at least 13 distinct threats to democratic civic discourse. These are:
1. Risks posed to an inclusive, pluralistic, and accessible civic discourse
- Absence of diversity
- Limited accessibility
2. Risks to recognizing and respecting differences and divisions in civic discourse
- Incivility: disrespectful relations between individuals
- Echo chambers, selective exposure to the like-minded, isolation of perspectives
- Polarisation/extreme views
- Exacerbation of conflict situations
3. Risks to a commitment to facts and informed dialogue, and to building citizen awareness and knowledge on pertinent issues, including misinformation and disinformation
- Lack of digital literacy
- Lack of trust in governments, media & online platforms
- News avoidance & news fatigue
- Spread of AI-generated deepfakes
4. Risks to enabling citizen engagement and representative attention
- Shadow banning of civic speech by video-sharing and social media platforms
- Overzealous enforcement of copyright laws
- Organised online campaigns targeting civil society
We consider electoral processes to be a specific case of civic discourse. Elections do not exist in isolation; they take place within the broader context of ongoing public debate, political engagement, and information exchange. A democracy cannot claim to have safeguarded its electoral processes while leaving civic discourse on platforms unprotected. If the digital public sphere is plagued by disinformation, polarization, and the suppression of legitimate speech leading up to an election, then measures taken to ensure the integrity of the electoral process itself are merely reactive and insufficient. A healthy, functioning democracy requires continuous protection of civic discourse, not just last-minute interventions as election day approaches.
That said, electoral campaigns introduce additional risks into the digital information ecosystem, risks that build upon and sometimes exacerbate pre-existing weaknesses in civic discourse. At a minimum, we identified the following election-specific risks:
- Spread of contradictory electoral promises, manipulation through micro- and nanotargeting
- Incorrect ad identification by upload filters: mistakenly identifying non-political ads as political and vice versa
- Spread of false information as regards voting processes
- Asymmetric amplification of political content from different electoral contenders
- Laxer standards for demoting or deleting high-profile politicians’ posts
- Third-party interference
Platforms’ shortcomings
We did not expect every platform to identify the exact same risks we had outlined. However, we had hoped for at least an honest effort to assess the full spectrum of risks and examine how their systems might contribute to them. Instead, what emerges from these reports is a pattern of evasion. Each platform acknowledges only a limited set of risks, and even then, it is often unclear what actions, if any, it is taking to address them. Most focus narrowly on policy violations and intentional misuse of services in the context of elections rather than addressing the broader platform-driven harms to the quality of democratic discourse.
It must be emphasized that this is a fundamental failure. The quality of civic discourse in the years leading up to an election is not a secondary concern but a prerequisite for safeguarding the electoral process itself. If platforms neglect systemic issues in the broader information ecosystem, their last-minute interventions during election periods amount to little more than a bandage on an open wound. Without sustained, structural efforts to protect democratic discourse year-round, election-specific measures will always be too little, too late.
Meta
Take Meta, for example. Its reports on Facebook and Instagram apparently identify 122 discrete risks “associated with the 19 Problem Areas and 8 Systemic Risk Areas” (p. 5 Facebook, p. 5 Instagram). The list of risks is either well-hidden in the texts or edited out before publishing. However, the list of problem areas Meta associates with the systemic risk area “Civic Discourse & Elections” on page 17 of both reports (Account Integrity & Authentic Identity, Bullying & Harassment, Coordinating Harm & Promoting Crime, Dangerous Organisations & Individuals, Discrimination / Discriminatory Actions, Disinformation, Hate Speech, Inauthentic Behaviour, Misinformation, Violence & Incitement, Voice & Free Expression) already reveals a deliberate framing. By casting problem areas and the associated risks mostly in terms of harmful behaviors (e.g., disinformation, hate speech, fraud) and bad actors, Meta avoids acknowledging structural risks embedded in its platforms.
This framing allows Meta to discuss mitigation measures like content removal, fact-checking partnerships, and account suspensions rather than addressing whether its platforms structurally disadvantage certain viewpoints, amplify others, or reinforce ideological silos. Despite extensive academic literature on this issue, the absence of any engagement with algorithmically driven potential or actual harm to civic discourse signals a deliberate omission rather than an oversight. The description of the recommender systems (pp. 27-29 Facebook, pp. 27-29 Instagram) is highly generic, offering little substantive insight beyond vague assurances that Meta has tested, analyzed, and improved its systems. In the discussion of mitigation measures (pp. 65-88 Facebook, pp. 63-85 Instagram), election-related risks predominate, relegating every other issue to the back seat.
The lack of differentiation between Facebook and Instagram is especially revealing. The two platforms serve distinct audiences, with Instagram’s engagement dynamics heavily driven by influencer culture and visual content. Yet its risk assessment is essentially a duplicate of Facebook’s. In our view, this fact alone shows the lack of substance in Meta’s reports, which offer little meaningful insight despite running to 95 and 93 pages, respectively.
Google and YouTube
Google’s report explicitly accounts, in some form, for several of the risks to civic discourse we identified, including risks to inclusivity (p. 24), harmful content (including polarisation on p. 58 and crises on p. 103), and content removals and de-ranking (p. 58), alongside, once again, a very strong focus on disinformation (p. 58). Election-specific risks are also identified when it comes to incorrect identification of political ads (p. 35), foreign influence campaigns (p. 26), and election disinformation (p. 103).
However, the company’s risk assessment focuses overwhelmingly on election-related disinformation, sidestepping the broader issue of how Google Search rankings and YouTube recommendations influence public understanding of political and social issues. Google emphasizes efforts to promote authoritative sources but provides little insight into how its algorithms make these determinations. There is also a glaring lack of transparency around recommender systems, arguably one of the most consequential features affecting civic discourse.
YouTube, despite its immense role in shaping political discourse, takes a similarly narrow view, largely limiting its assessment to disinformation and election integrity (p. 101 and p. 103). While it references demonetization as a mitigation measure, it does not seriously engage with concerns about algorithmic amplification, echo chambers, or the spread of extreme content. The report briefly mentions adjustments to recommender systems, but there is no meaningful discussion of how these changes impact content visibility or public discourse.
TikTok
TikTok’s report partially acknowledges the risks to civic discourse, particularly regarding disinformation, but falls short of implementing meaningful mitigation measures.
For civic discourse, the main risks that TikTok takes into account in its report are linked to the spread of disinformation. The platform identifies political disinformation as a Tier 1 risk but classifies its overall impact as moderate, an assessment that appears to downplay the platform’s growing influence on political narratives. Among the risks related to disinformation, TikTok also includes a lack of media literacy and the spread of deepfakes, AI-generated content, and synthetic media (p. 35). The mitigation measures put forward include cooperation with fact-checkers to identify and remove disinformation, funding for media literacy initiatives, and labeling obligations and prohibitions for different kinds of AI-manipulated media (p. 36). Furthermore, while moderation of harmful content is taken into account, there is no explicit mention of addressing risks linked to echo chambers and polarisation. Finally, the report does not genuinely engage with risks related to inclusivity, accessibility, and citizen engagement.
Regarding election-specific risks, the report mentions election disinformation and the monitoring and removal of foreign influence operations (p. 36). Still, it fails to address the role of influencers in shaping public opinion, an area where TikTok plays a uniquely powerful role compared to other platforms. The issue of political advertising, which persists despite TikTok’s nominal ban (p. 29), also remains unresolved, with the report offering little clarity on how the ban is enforced in ad review. In the aftermath of the 2024 Romanian elections, this is evidently a significant and deeply concerning gap.
Like the other reports, TikTok’s mostly focuses on external threats and considers only a narrow range of risks, mostly linked to disinformation, repurposing efforts the company has already undertaken on the issue. Systemic risks linked to TikTok’s feed and recommender systems, and related mitigation measures, are not explored in depth.
X
Unsurprisingly, X (formerly Twitter) is no positive exception here. Just like the other platforms, X treats civic discourse risks mainly as a problem of harmful content rather than an issue of platform governance. However, it goes further and, semi-explicitly, rejects any real accountability for the structural ways its own design choices shape democratic engagement. X’s (mis)interpretation of its own 2020 research on amplification (p. 55) and its highly dismissive treatment of it are particularly striking in light of recent research showing how “engagement measures and unknown factors related to party affiliation contribute to the overrepresentation of extremes of the German political party spectrum in the default algorithmic feed of X” in 2025.
Where X takes a radical departure from the rest is in framing content moderation itself as a systemic risk to freedom of expression (p. 56). The report repeatedly emphasizes that X is only one of the six or seven platforms an average social media user uses and that its mission is to empower users to interact freely, even when what they have to say is offensive or controversial. It also proudly points to the Community Notes feature as a counter to disinformation, though the feature’s effectiveness is known to be contentious.
Conclusion
To sum up, the risk assessments published under the DSA in November 2024 focus narrowly on election integrity and disinformation, neglecting broader risks to civic discourse, political participation, and social cohesion. While disinformation is an important issue, other systemic harms are insufficiently addressed. Platforms should expand their risk assessments to encompass all aspects of civic discourse rather than limiting their scope to a handful of election-related risks, where even then, the primary focus is disproportionately on disinformation.
A rigorous and standardized risk assessment process, with stakeholders meaningfully engaged, is essential for understanding and addressing the challenges affecting civic discourse and electoral integrity online. At a time of democratic backsliding in Europe and beyond, it is more critical than ever to leverage all available tools to reinforce democratic institutions and uphold a healthy civic discourse and election integrity across all EU member states.