Platforms Report to EU Regulators Under DSA With an Eye on US Politics
Tim Bernard / Dec 16, 2025

As the European Commission prepared to issue its first fine against a Very Large Online Platform (VLOP) under the Digital Services Act (DSA), most VLOPs publicly released their 2025 reports on assessing and mitigating systemic risk, as required by Articles 34 and 35 of the DSA. These reports land at a moment of growing tension between EU regulators and US-based platforms, as political shifts on both sides of the Atlantic put new pressure on the DSA’s enforcement and legitimacy. Although embedded in a dense regulatory framework, the reports remain rare documents where tech companies publicly confront the societal risks their platforms may pose, and outline how they claim to be addressing them.
When these reports were first released last year, I wrote for Tech Policy Press about key takeaways from the reports by the major speech platforms. (A quirk of last year’s November release was that some platforms released their 2024 report and others only their 2023 report. My review last year considered whichever report was most recent.)
This article considers what new insights can be gleaned from the latest reports, and in particular changes compared to 2024, for the same basket of platforms covered last year, now including LinkedIn. The analysis focuses on how platforms frame their DSA compliance and engagement with EU regulators against the backdrop of a shifting political climate in the United States, where most of these companies are headquartered, and an increasingly strained transatlantic relationship on tech regulation. In this context, notable shifts appear in how platforms address hate speech, information integrity, and Diversity, Equity, and Inclusion (DEI).
Some comparisons are hard to make with confidence, as many of the platforms are still revising their report structure and/or methodology, tweaking categorizations, using different data inputs, or giving different weights to existing controls. None of the platforms appears to be fully transparent about its data and calculations (for instance, X has maintained its practice of extensively redacting content moderation figures), making public auditing impossible. While several of the reports clearly signposted risk analyses and mitigation measures that were new this year, omissions may reflect a drive for concision, an affirmative decision to de-emphasize the content, or the fact that the substance no longer applies. Similarly, something new to a report may not in fact be new to the platform.
Despite this, some cross-platform trends did emerge regarding risks and mitigations. There was surprisingly little new material on generative AI risks as a whole (with Google as an exception), despite leaps forward in the last year, particularly the emergence of truly photorealistic synthetic content. Generative AI did, however, crop up in a few specific contexts, such as nudifier apps and their outputs, and false representations of public figures, including in the context of scams. Other themes in the new material included election risks (though in lower volume than the previous year), fallout from the evolving Gaza conflict, sextortion trends, and crisis planning.
The platform reports added details about measures related to a variety of topics that the DSA text and European Commission guidance specifically emphasize. The foremost of these was risks to minors, with issues like researcher access, appeals through out-of-court dispute resolution bodies, reporting processes for ‘trusted flaggers,’ and privacy also appearing with some frequency. Reports also added lists of recent engagements with civil society organizations on a broad range of issues and detailed some ongoing collaborations that were not mentioned in earlier reports.
Echoes of US politics in the systemic risk assessments
For at least the last year, platforms have been turning right in sync with the demands of the Trump administration, loosening rules on hate speech and misinformation, and rolling back fact-checking and DEI initiatives. Doing so has potentially put them at odds with DSA requirements and EU priorities. The US government has backed the platforms and assailed the DSA as a tool of censorship, most recently in denouncing X’s fine under the Act. But administration officials are unlikely to be trawling through these systemic risk reports, so it is interesting to see how, in the first reports issued during the Trump administration, the platforms present themselves on these politically charged issues in materials prepared for European regulators.
Meta’s report references two seemingly separate circumstances in which it relaxed content policy. The first is a recognition and reversal of supposed over-enforcement in late 2024. The second is the raft of changes to content policy in early 2025 to allow more derogatory language, particularly directed towards immigrants and trans people. In the same period, Meta ended fact-checking programs in the US as well as internal DEI programs.
The risk category formerly labeled “hate speech” has been renamed “hateful conduct,” matching Meta’s community guidelines. The relaxation of policy is justified as intended “to enable more public debate and discussion about social issues of political and social importance, as well as to reduce possible over-enforcement,” as the company also explained in its public statement in January.
The reports also include a new claim: that this change was preceded by risk assessment and that the platforms implemented “reasonable, proportionate, and effective mitigations to address the potential risks associated with the changes.” The language regarding tradeoffs with speech rights is also slightly stronger, proclaiming “freedom of expression as our north star.” Meta’s Civil Rights team, which has since been disbanded, is no longer mentioned, nor is the civil rights training that, per last year’s report, was previously required for staff.
Disinformation is largely discussed through the lens of “coordinated inauthentic behavior” (CIB), a longstanding Meta classification, and the separate disinformation section in the earlier report is now integrated into this category. A sentence disambiguating mis- and disinformation has also been removed, and an extensive caveat is introduced regarding the lack of “society-wide consensus on what constitutes misinformation and how it should be addressed.” CIB is largely defined and enforced by content-neutral signals, which is clearly more comfortable terrain for the platform. Nevertheless, under that rubric, Russian influence operations are discussed alongside a variety of mitigation efforts, notably still including labeling, promoting reliable information, media literacy initiatives, and fact-checking. Fact-checking remains present, despite its effective elimination in the US, though it is deemphasized in comparison with the previous year.
For Google / YouTube, the cross-service sections of the report and the YouTube-specific section were reviewed. No mention was made of the removal of gender identity from the YouTube hate speech policy, a decision that might have been deemed relevant to the report’s section on hate speech. Rather, the report de-emphasizes hate speech outside of its dedicated treatment by removing it from a couple of places where it formerly served as an example. A related data point: YouTube removed only a quarter as many comments for hate speech in Q1 2025 as it did in Q2 2024, even though modestly more videos were removed for the same policy category, raising questions about changes in enforcement.
Some small tweaks indicate a change in emphasis for the whole project of content moderation: the word “potentially” is inserted before “harmful” or “problematic” several times, and the phrase “preventing, removing, or raising visibility of certain types of content (i.e., content moderation)” is revised to the rather less explicit “processes for reporting illegal content and by enforcing YouTube’s policies.”
Throughout the report, the terms “misinformation” and “disinformation” no longer appear, replaced by the less charged “misleading information,” and “authoritative sources” are now “high-quality sources.” A few references to COVID and medical misinformation have been removed, as have several sections on election and civic discourse mis- and dis-information efforts. In a sentence about directing users to reliable medical information, the WHO has been removed as a possible source. The company’s “prebunking” initiatives are not mentioned, nor is the link to Google’s Fact Check Explorer that was present in the prior report. A new “highlight” is “Continued misleading information policies where content could pose a serious risk of egregious harm” (emphasis added), implying a step back from enforcement in all but the most severe of circumstances.
Lastly, a section formerly titled “Promoting equity” is now “Equality of access,” and it omits discussion of the work of the platform’s Racial Justice, Equity, and Product Inclusion team and the Inclusion Working Group, both of which appeared in the prior report.
LinkedIn also does not mention that it removed its prohibitions on misgendering and deadnaming trans people this year. Health misinformation—in particular, a policy prohibiting contradicting major health organizations on vaccines—is no longer discussed in the body of the report, though the broad category still appears in a chart in an appendix. LinkedIn’s previous report referenced work with Microsoft’s Democracy Forward team on civic integrity issues; that team now appears to have been reorganized into an initiative called “Tech for Society” and is no longer mentioned in the report. A statement about stakeholder engagement “through the EU Code of Practice on Disinformation and the EU Code of Conduct on Countering Illegal Hate Speech” has also been removed. (While not strictly obligatory, both of these codes were formally integrated into the DSA regulatory framework in early 2025.)
X does not appear to have added much on politically sensitive issues, perhaps in part because its perspective was already baked in long before 2025, or because it adopts a less discursive style and hews closely to the text of the previous year’s report. This year’s report does, however, include several statements that appear to run counter to the public positions of the platform and its owner. In particular, it mentions working productively with governments to take down influence operations, participating in the Code of Conduct on Illegal Hate Speech, and working with NGOs on hate speech.
TikTok is a platform currently based outside of the US, yet it is perhaps the most vulnerable of this group to the whims of the administration, given its US operation’s reliance on the administration’s discretionary non-enforcement of the Protecting Americans from Foreign Adversary Controlled Applications Act. (In an October commentary for Tech Policy Press, Mark Scott compared the extreme leverage that the US currently has over TikTok to the more limited power of the EU through the DSA.) Despite this, TikTok, if anything, moved away from right-wing-coded positions on hate speech and information integrity. Gender identity was added to a list of hate speech categories; more detail was added about fact-checking efforts (though a dedicated appendix section was removed); and COVID misinformation appeared as a topic of concern. TikTok also described a new climate change awareness program and implicit bias training for its staff. It is worth noting that, just as X’s fine was announced by the Commission, TikTok avoided similar measures by cooperating with regulators and making binding commitments to improve.
Platforms push back
With the enthusiastic support of the current US administration, the platforms have taken a more oppositional public line with the EU this year. Even in these largely conciliatory documents, some hints of this can be detected, when the tone switches from transparent or promotional to a tad more defensive. (The need to balance free expression with harm mitigation measures is discussed in many of the reports, but while this can be a defensive tack, it is also a priority for the DSA.)
Meta, while commenting on the paid ad-free and less-personalized-ads options now available to EU users, includes a strident (and unnecessary?) defense of its more-personalized default: “We firmly believe that personalised ads are a vital component of the ad-supported internet and we will continue to advocate for regulations that support the responsible use of personalised advertising, allowing us to maintain the high-quality, free services that people have come to expect from us.”
The report also hits a slightly frustrated note—potentially with some justification—when explaining that Meta has trouble balancing enforcement of its dangerous organizations and individuals policy when countries do not see eye to eye on whose speech is illegal versus protected, “e.g., groups advocating for the independence or secession of a particular territory.” An undercurrent of defensiveness can also be detected in a discussion of CIB. Pointing to the cross-platform nature of the problem, the report notes: “[w]e’ve seen a number of influence operations shift much of their activity to platforms with fewer safeguards.” One final example is the comment that harmful content-detecting classifiers cannot be used in Facebook Groups “due to legal restrictions.” (While the law or laws in question are not mentioned, EU privacy laws seem likely candidates.)
TikTok describes its efforts to prevent election interference at length, including details on its mitigation efforts during the 2024-25 Romanian elections. This is very likely an attempt to defend itself against accusations that it was at fault in allowing Russian influence to propel a far-right candidate into a leading position in the first round of the presidential election in November 2024. The announcement of the subsequent European Commission investigation specifically cited TikTok’s 2023 and 2024 risk assessments.
LinkedIn, noting that the European Commission released guidance on responsibilities for protecting minors just as the report was being written, carefully lays out a justification for being less than comprehensive in its mitigations in this area—in brief, that it’s not really a platform for kids.
Google / YouTube largely steers clear of this tendency in its changes for this year, aside from some slightly beefed-up statements on balancing enforcement with preserving free expression, taking into account “prevailing public interest in access to content” and avoiding “over-restrictive or improper content removals.” This includes citing a 2025 European Court of Human Rights decision in a case that Google brought against Russia over a fine for not carrying out regulator-demanded content takedowns.
X’s 2023 report employed consistently forceful rhetoric about “freedom of speech,” but that was already toned down for 2024, and no new material suggested this kind of pushback.
It seems unlikely that the changes in the reports from Meta, Google, and, to some extent, LinkedIn, that run counter to EU priorities will go unnoticed by EU regulators, who are surely looking closely at the reports—far more closely, one would think, than US politicians. Perhaps the platforms are taking a gamble that the sober European bureaucrats will be less moved by semi-public rhetoric than mercurial leadership on this side of the Atlantic—at least now that Thierry Breton has left the Commission. As noted above, systemic risk reports have already been cited in DSA investigations of platforms, and it will be interesting to see if these hints at alignment with positions that are favored in Washington and disfavored in Brussels will make an appearance in future regulatory statements from the Commission.