
Most Comments Deleted From Social Media Platforms in Germany, France, and Sweden Were Legal Speech — Why That Should Raise Concerns for Free Expression Online

Jacob Mchangama / Jun 24, 2024

In the age of ubiquitous social media, the power to shape public discourse lies in the hands of a few digital giants. Yet, recent European regulations intended to curb "torrents of hate" online could be stifling free expression. As policymakers tout these measures as necessary for a safer internet, a critical question emerges—is legally permissible speech being removed from social media platforms?

The Growing Regulatory Web in Europe: The Digital Services Act and NetzDG

European digital regulations, particularly Germany's Network Enforcement Act (NetzDG) and the European Union's Digital Services Act (DSA), were conceived to tackle the proliferation of illegal content online. The NetzDG, enacted in 2017, required social media platforms to promptly remove illegal content such as defamation and hate speech or face substantial fines. That law is now being repealed and superseded by the DSA.

In 2018, French President Emmanuel Macron warned about “torrents of hate coming over the Internet.” European Union Commissioner and digital enforcer Thierry Breton asserted in 2020 that “the Internet cannot remain a ‘Wild West.’” The DSA, which became fully applicable in February 2024, seeks to ensure a "safe, predictable, and trusted online environment." In 2023, both Breton and Macron raised the possibility of using the DSA to shut down social media platforms during periods of civil unrest. Fortunately, civil society organizations swiftly rebuked this suggestion.

Just this month, European Commission President Ursula von der Leyen warned that the “core tenets of our democracy” were under threat when she unveiled plans to establish a “European Democracy Shield” to counter disinformation and foreign interference online. This would, no doubt, expand the power of the DSA to regulate broader forms of speech on the Internet.

The transformation of the DSA into a tool for broader regulation of Internet speech, including threats of wholesale shutdowns, requires civil society to critically examine the underlying rationale for these regulations and their impact on online discourse.

Examining Content Removals in France, Germany, and Sweden

In a new report published by The Future of Free Speech, we sought to determine whether the underlying premise of these laws was true: Are social media platforms overrun with illegal content? And if so, how are platforms and users moderating that content in response to existing digital regulations?

According to our report, a staggering majority of the content removed from platforms like Facebook and YouTube in France, Germany, and Sweden was legally permissible. Specifically, our study examined deleted comments from 60 of the largest Facebook pages and YouTube channels in France, Germany, and Sweden, revealing that, depending on the platform and country, between 87.5% and 99.7% of the removed content was legal.

While the DSA wasn’t fully in force during the period of our analysis, every country we examined had laws in place defining illegal speech, with Germany’s NetzDG carrying the most significant consequences for content moderation. Its impact was particularly pronounced: in Germany, 99.7% of the deleted comments on Facebook and 98.9% on YouTube were legally permissible. This suggests that platforms are excessively cautious, possibly due to the hefty fines imposed for non-compliance.

Although the study could not determine whether these deleted comments were removed by the platforms, channel administrators, or users, reports released by Meta suggest the company takes a high percentage of content moderation actions before content is reported by a user. For example, in reporting on its enforcement of community standards from January to March 2024, Meta stated that, of the violating hate speech content it actioned, nearly 95% was found by the company and just 5% was reported by users.

If companies, pages, and channels are overcorrecting in response to sweeping digital regulations, removing legally permissible content to avoid excessive fines, this could have a major chilling effect on free speech online.

The Cost of Overzealous Moderation

The repercussions of this over-removal are far-reaching. Removing legally permissible speech to comply with digital regulations undermines the fundamental right to free expression and erodes public trust in social media platforms as venues for open discourse. General expressions of opinion, such as statements of support for a controversial candidate, made up the bulk of the deleted comments, even though they contained no linguistic attacks, hate speech, or illegal content and often violated neither laws nor community standards. We found that over 56% of the removed comments fell into this category. This indicates a troubling trend in which platforms sacrifice free speech at the altar of over-cautious moderation.

Over-moderation could result from attempts to avoid steep fines under current regulations, from the significant expansion in the scope of platforms’ own hate speech policies, or from cultural pressure exerted by civil society and the media. Platforms may also instinctively adopt restrictive moderation policies to protect their own reputations or avoid being associated with controversial content.

Whether page administrators, channel owners, or the platforms themselves are removing content, our findings show that these factors are having a chilling effect across these platforms, further diminishing the diversity of viewpoints essential to a vibrant democratic society.

European policymakers have claimed that social media platforms are awash with illegal hate speech to justify the need for drastic online regulation like the DSA. Our report provides empirical evidence showing that only a small percentage of comments being removed from platforms are illegal. These findings highlight a pressing need for a stronger emphasis on protecting free expression and access to information in digital regulations and content moderation policies.

Respecting Fundamental Rights to Free Expression

Policymakers and content moderators must recognize that overly stringent regulations can backfire, suppressing legal speech and undermining the principles of free expression. Such restrictive policies can also lead to the removal of content from the very minority voices they are intended to protect.

Instead, we need to create a digital environment where a variety of perspectives can coexist and where truly harmful content is moderated without limiting legal political discussion, even when it includes controversial or offensive ideas. Regulators and platforms should refine and narrowly tailor the criteria for content removal to ensure more precise targeting of genuinely harmful content.

To its credit, the DSA requires platforms to provide transparent content moderation guidelines and more robust appeals processes for users. This could help mitigate the chilling effect and restore public trust in social media as a space for free and open dialogue. European policymakers should further reassess the impact of current digital regulations and recognize the unintended consequences that might play out as the DSA goes into full force.

The findings of this report serve as a stark reminder that in the quest for a safer online environment, we must not lose sight of fundamental human rights to free expression. While platforms are not beholden to international human rights law, policymakers, platforms, and civil society should ensure that content moderation policies protect users without silencing the voices that make our democracies strong.

Authors

Jacob Mchangama
Jacob Mchangama is the Executive Director of The Future of Free Speech and a research professor at Vanderbilt University. He is also a Senior Fellow at The Foundation for Individual Rights and Expression (FIRE) and the author of Free Speech: A History From Socrates to Social Media.
