Meta's Content Moderation Changes are Going to Have a Real World Impact. It's Not Going to be Good.
Dia Kayyali / Jan 9, 2025

On Tuesday, Meta cofounder and CEO Mark Zuckerberg and newly-minted Chief Global Affairs Officer (and longtime conservative Republican operative) Joel Kaplan announced that Meta would be getting rid of its fact-checking program and pledged to work with President-elect Donald Trump to uphold free speech and fight "dangerous censorship" from platform regulation. Meta also published sweeping changes to its "Hateful Conduct" Community Standard. The policy, formerly known as the Hate Speech Community Standard, is now written to specifically allow more hateful content targeting transgender people, immigrants, and women–including cisgender women.
Every change Meta made on Tuesday, like everything it has ever done, was ultimately a business decision. But this may be its most cynical business move yet. The changes appear to cater specifically to the rise of far-right leaders and ideology globally, not just in the United States. Like Elon Musk’s transformation of X, the changes on Meta platforms will be noticeable, and they will be harmful not only to marginalized groups but also to public safety more broadly.
Why now: The geopolitics
Meta and Zuckerberg have declared war on the global movement to regulate platforms. The US has, unsurprisingly, been left behind in this movement, despite Zuckerberg's claim that "it's been so difficult over the past four years when even the US government has pushed for censorship." The US has, for example, failed to pass legislation related to meaningful transparency, such as the Platform Accountability and Transparency Act. Zuckerberg's announcement specifically called out Europe and seemed to equate the EU's willingness to deem some types of content illegal with China's systemic censorship regime.
There has indeed been a significant wave of platform regulation in recent years. It's also true that civil society (myself included) has critiqued these regulations, especially when they directly force platforms to moderate content in specific ways. However, even regulations with many problematic aspects, like the UK's Online Safety Act, have some focus on ensuring transparency and access to appeals, and force large platforms to invest appropriately in content moderation. Europe's Digital Services Act (DSA) and other regulatory transparency and appeals requirements are no doubt costing Meta a lot of money, so its return to a traditional anti-regulation stance is no surprise, especially given that many observers believe Meta funded its own Oversight Board partly as a way to fend off further regulation.
What observers from the US may have failed to note is how this all overlaps with the rise of the far-right in Europe. Germany, in particular, is stunning in this regard. The fall of the German government barely made news in the US, but it's a big deal. Germany will hold snap elections at the end of February, and the far-right Alternative für Deutschland (AfD) has made incredible gains in popularity and power in recent years. Germany has always played an outsized role in EU politics, particularly on matters related to online regulation, and the AfD has specifically railed against so-called censorship. In fact, President-elect Trump's patron, Elon Musk, is actively courting the far-right in Germany, which has responded gleefully by welcoming his support and calling him a savior of free speech.
What the AfD appears to mean by ‘free speech’ is protecting its ability to incite offline violence. Research from seven years ago showed that AfD posts led to a predictable uptick in anti-refugee violence. The party is much more popular now, and there's no reason to think the problem has improved. This pattern is repeated elsewhere in Europe and abroad. For example, in November 2023, claims by far-right figures that migrants were responsible for the death of a teenager in a small village in France led to serious street violence. And over the summer, false information that a Muslim immigrant had committed a mass stabbing in Southport, UK, directly fueled some of the most violent protests since 2011. Protesters set fire to alleged refugee accommodations, vandalized mosques, and even attacked police officers. Finally, President-elect Trump and Vice President-elect JD Vance's false claims that Haitian immigrants in Springfield, Ohio, were eating dogs and cats led to dozens of bomb threats, closures of government buildings, increased harassment against immigrants, and increased activity by hate groups like the Proud Boys.
Policy Changes
The specific changes to Meta's former hate speech policy deserve far more attention than they are receiving. The policy is now called "Hateful Conduct," and it is worth going to Meta's Transparency Center and clicking on the tracked changes version, dated January 7, which shows all the specific changes.
Notably, the policy will now allow calls for exclusion and "insulting language" towards trans people, women, and immigrants. One line demonstrates the lengths to which this allowance goes: "We do allow allegations of mental illness or abnormality when based on gender or sexual orientation." The policy also now specifically allows users to spread claims that vulnerable groups purposefully spread Covid, and it removes prohibitions against calling women property and calling non-binary people "it."
One change exempts from any protections "groups described as having carried out violent or sexual crimes or representing less than half of a group." This seems to mean that Meta would allow users to post direct incitement to violence in the form of claims of sexual crimes committed by immigrants or trans people. This change is significant because Meta surely knows that when it comes to forms of speech that incite violence, "none is more pervasive or powerful than telling people that someone is threatening their children."
Some commentators have opined that the enforcement changes Meta also announced may be "[m]uch more consequential than the policy changes." Perhaps they think that the changes will only impact people personally affected by violence related to immigration, gender identity, and gender. They are sorely mistaken. The topics Meta is loosening restrictions on are those most intimately tied to "lone wolf" shootings and threats of violence against women and trans people. Even if one doesn't buy the concept of stochastic terrorism ("the idea that influential individuals may demonize target groups or individuals, inspiring unknown actors to take up terroristic violence against them"), the connections between online content and offline violence are simply too numerous to ignore at this point. Even if Meta does continue to moderate terrorist content, it will now do a much better job of serving as a gateway to far more radical spaces, such as far-right Telegram groups.
In fact, Meta already allows most Great Replacement Theory (GRT) related content on its platform. This dangerous conspiracy theory claims that "there is a conscious effort to replace white populations through immigration, integration, abortion, and violence against white people." It has been central to the motivations behind some of the highest-profile mass killings of recent years, most notably the violent live-streamed massacre of 51 people at two mosques during prayers in Christchurch, New Zealand, in 2019, and it has featured in nearly every such shooter's manifesto. Meta has, until now, very rapidly removed content related to mass shootings, but GRT content has been purposefully and extensively spread, notably by far-right French politician Éric Zemmour.
The changes will also likely lead to an increase in incel ("involuntary celibate") content. In fact, the new policy seems almost tailor-made to allow the "your body, my choice" content that has surged since the election and that has been linked to offline harassment as well. Cases of directly linked violence here are harder to trace, but between 2015 and 2022, incels were responsible for at least 53 deaths, including multiple mass shootings.
Meta's changes regarding women, immigrants, and trans people may not immediately impact everyday users. They may never affect you on the platform at all. But if you don't fit into these categories, don't be lulled into a false sense of security. These changes make everyone less safe.