To Evaluate Meta’s Shift, Focus on the Product Changes, Not the Moderation
Ravi Iyer / Jan 15, 2025

Meta’s recent announcements include two changes in its moderation efforts and one change to its algorithm design. The algorithm design changes are the least clear and likely the most impactful, writes Ravi Iyer.
Meta recently announced three changes to allow “more speech and fewer mistakes,” as founder and CEO Mark Zuckerberg put it. The company is ending its third-party fact-checking program in favor of community notes and reducing its content moderation efforts more broadly. And it is changing how its algorithms treat certain topics, such as discussions of gender, politics, and other social issues.
Fact-checking was never done at a substantial enough scale to materially affect online discourse. One estimate found that only 302 fact-checks were conducted in the US in January 2020, a negligible number given the hundreds of millions of US users on Meta’s platforms. Fact-checking is even sparser internationally. And while those 302 pieces of content may have been important, it is unlikely that fact-checks happened quickly enough to prevent harm, given how fast false news spreads. This limited impact also has to be weighed against the mistakes that are inevitable in any such process. Those mistakes have led to a clear backlash: most Americans of all political persuasions believe that social media sites censor political views, and few have much confidence in companies’ ability to discern truth from falsehood.
These same issues apply to moderation more broadly, which still relies on fallible human judgment of content, outsourced to individuals with far less expertise and time to get decisions right than fact-checking organizations have. As the journalist Casey Newton writes in one of many reactions focused primarily on moderation and fact-checking, “Most people I know have been thrown into ‘Facebook jail’ once or twice, typically for something innocuous.” Hateful speech online is undoubtedly a problem, but the solution of making rules about what people can and cannot say has fueled the rise of numerous influencers who target young men online with the message that they are being censored, as part of a broader narrative that they are being oppressed. Moderation has not solved our problems with online discourse, and while it is essential for a limited set of harms, it has also often made things worse.
Note that this can be true even as moderators do important, heroic work. There are indeed individual pieces of content that are harmful and should be removed. The problem is believing that this system can scale to the size of these platforms and solve thorny issues like hateful speech or misinformation without an unacceptable amount of bias, mistakes, and controversy.
The Pivot to Design
Many platforms have long recognized the problems inherent in moderation and, in times of crisis, have developed “break the glass” measures: typically temporary design changes that reduce risk in an ecosystem without making judgments about what people can say. These measures often involve adding friction, increasing privacy, or reducing engagement incentives. Some have been adopted permanently, such as removing the incentive to optimize for comments and shares on specific topics, or introducing greater privacy and reducing engagement-based recommendations for teens within certain topics.
The announcement that Meta would be changing its approach to political content and discussions of gender is concerning, though it is unclear exactly what those changes are. Given that many product changes in those content areas were used in high-risk settings, a change intended to allay US free speech concerns could lead to incitement to violence elsewhere. For example, per this post from Meta, reducing “content that has been shared by a chain of two or more people” was a content-neutral product change made to protect people in Ethiopia, where algorithms have been implicated in the spread of ethnic violence. A similar change – removing optimizations for reshared content – was discussed in this post concerning reductions in political content. Will those changes be undone? Globally? Such changes could also lead to increased amplification of attention-getting discussions of gender. Per this report from Equimundo and Futures Without Violence, 40% of young men trust at least one “manosphere” influencer; these influencers often exploit algorithmic incentives by posting increasingly extreme, attention-getting mixes of ideas about self-improvement, aggression, and traditional gender roles.
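To make concrete how a design-level change differs from a moderation decision, consider the deliberately simplified sketch below. It is not a description of Meta’s systems: the field names, topic labels, and weights are invented for illustration. It shows two of the interventions discussed above, dropping engagement-prediction weights for sensitive topics and demoting posts that arrive via long reshare chains, neither of which requires judging what a post says.

```python
# Hypothetical illustration only: a toy ranking score showing what
# content-neutral, "break the glass"-style interventions look like in code.
# All field names and weights here are invented for this sketch and do not
# describe Meta's actual ranking systems.

from dataclasses import dataclass


@dataclass
class Post:
    quality_score: float       # e.g., a model's estimate of informative value
    predicted_reshares: float  # engagement prediction
    predicted_comments: float  # engagement prediction
    reshare_depth: int         # how many reshares deep this copy of the post is
    topic: str                 # coarse topic label, e.g., "politics"


SENSITIVE_TOPICS = {"politics", "gender", "health"}


def rank_score(post: Post, break_glass: bool = False) -> float:
    # Baseline: an engagement-optimized score.
    score = (
        1.0 * post.quality_score
        + 0.5 * post.predicted_reshares
        + 0.3 * post.predicted_comments
    )
    if break_glass:
        # Content-neutral intervention 1: stop rewarding predicted engagement
        # for sensitive topics, without inspecting what the post says.
        if post.topic in SENSITIVE_TOPICS:
            score = 1.0 * post.quality_score
        # Content-neutral intervention 2: demote deep reshare chains, i.e.,
        # "content that has been shared by a chain of two or more people."
        if post.reshare_depth >= 2:
            score *= 0.5
    return score
```

The point of the sketch is that these interventions operate on how content is scored and how it travels, not on its message, which is why they can be applied content-neutrally and why rolling them back raises different questions than moderation policy does.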
Does Meta’s change mean that it is going to undo the progress it had made in removing engagement optimization for content areas it knows are harmful? Including for youth? We know the harm that arises when people searching for health content get results optimized for clicks. If I were a regulator or journalist, these are the questions I’d be focusing on – not whether a tiny group of fact-checkers will continue trying to police billions of users.
There are glimmers of hope in regulating algorithmic design. More and more jurisdictions are attempting to regulate algorithms, and both California and New York passed laws last year regulating the use of personal data in algorithms serving minors. One positive development came last week, when a court upheld most of California’s law in a nuanced decision recognizing that optimizing for engagement within algorithms is not an exercise in free expression. Since platforms intend to convey no particular message and the goal is merely to increase usage of the product, engagement-based algorithms are a product function that can be regulated in the same way we regulate food and car safety.
More and more jurisdictions are focusing on product design in 2025, both to protect free speech and to more robustly protect users from the design decisions that allow a small number of hyperactive online influencers to dominate online discourse and harm both our kids and our democracy. Let’s not get distracted by unwinnable arguments about what content should or should not be allowed online. Instead, let’s focus on the product changes.