Right, Capacity and Will in Content Moderation: A Case for User Empowerment

Hiromitsu Higashi / Jan 19, 2023

Hiromitsu Higashi is an MA candidate at Johns Hopkins SAIS and a Google Public Policy Fellow at the R Street Institute. His research focuses on internet governance. The views expressed in this essay do not reflect those of Google or R Street.

Content moderation is perhaps the thorniest issue in internet governance. Since August 2022, social platforms such as Facebook, YouTube and TikTok have seen successive shocks—from the failure to curb Brazilian election disinformation to the Fifth Circuit’s decision upholding Texas H.B. 20—each with profound techno-political implications. The root of the problem lies in platforms’ limited capacity to scale consistent moderation practices and their reluctance to assume the burden in full. A sober reassessment of these elements from the platforms’ perspective points to a logical endpoint: the transfer of moderation power, at least in part, from the platforms back to the users.

In the United States, content moderation is now a focal point for both sides of the political aisle, though for different reasons. Over the years, charges brought by lawmakers have included companies’ insufficient efforts to combat mis/disinformation and platform manipulation; failure to protect users from harassment and harmful content; concerns (some now diminished) over the creation of echo chambers; alleged ideological bias against conservatives; and the over-concentration of unchecked political power that threatens democratic institutions via opaque algorithms. Big tech critics have campaigned for measures as far-ranging as antitrust reforms, data portability, end-user education, federal regulatory sandboxes and direct legislative intervention in online content. Points of contention persist, however: the technical complexity of implementing reforms, what some regard as a dangerous confluence of socio-political and economic agendas in the use of antitrust tools, and a potential clash with American constitutional ideals around free speech (an issue especially salient against the backdrop of the nationwide ideological divide). In practice, the lack of progress on content moderation since 2016 offers no cause for celebration.

As opposed to questioning what different entities in the digital sphere should do on content moderation, perhaps the line of inquiry should be directed to what social media platforms have the right to do, what they can do and what they are willing to do. This prompts us to rethink the three aspects of platform companies’ content moderation efforts: right, capacity and will. “Right” refers to the inherent rights that platforms have to engage in content moderation. “Capacity” refers to the ability to scale content moderation in line with the harms. And “Will” refers to the desire of platform executives to engage with these issues.

Right

Social media platforms are private, profit-driven commercial entities, and under the First Amendment, they have every right to shape their online speech environments. Some have argued that platforms are modern “public squares” and “analogous to a governmental actor” that should be barred from making rules on speech—against which some scholars have offered robust counterarguments. Some have pointed out the flawed analogy between online platforms and physical public squares, as well as the discriminatory nature of the latter in American history. Others highlight that platforms’ deployment of automated and manual tools to host, sort and withdraw content amounts to the exercise of editorial judgment, which is protected under the First Amendment and Section 230 of the Communications Decency Act.

Platforms’ responsibility to defend free expression, or to restrain the exercise of their outsized power over users’ information, is largely self-imposed, stemming from market and public pressure and from moral rather than legal grounds. Online platforms seek to operate as vehicles for the free, safe and instantaneous exchange of information, unimpeded by geographical barriers and unjust censorship, and that promise serves as the foundation of customer trust and loyalty for platform operators.

Capacity

Content issues can be divided into two categories with some overlap: harmful content (e.g., explicitly illegal content, hate speech, graphic content, copyright violations) and mis/disinformation. The first requires the ability to make judgment calls on what counts as unacceptable material or behavior on a platform; the second, the ability to distinguish truth from falsehood. Despite various efforts, platforms have so far fought an uphill battle against the prevalence of harmful content due to sheer scale, a lack of transparency in rule enforcement, unreliable AI-enabled moderation systems, vagueness in self-crafted community guidelines, and the many posts whose context is far too ambiguous even for human judgment—not to mention the ideologically charged question of who gets to define hate speech.

The matter only gets worse in the fight against mis/disinformation. Combating false information demands significant investment in fact-checking, an exceptionally time-consuming and labor-intensive endeavor even for the dedicated fact-checkers to whom social platforms outsource the task. The prospect is further dimmed by the involvement of state-backed actors, the speed at which falsehoods spread, and the trust deficit between governments and tech companies. Moreover, for platforms that operate on a global scale with hundreds of millions of posts per day, effective content moderation requires a deep understanding of the factors that shape users’ perception of online speech, notably local languages, humor, traditions and political cultures. Both in theory and in practice, the platforms simply do not possess the scaling capacity to keep pace with the complexity or volume of content moderation challenges.

Will

When platforms do engage in content moderation, they never want to be the only entity responsible for it. Intentions are hard to measure, but platform companies are commercial enterprises, not public services; they want to win over the world through technological innovation, not politics. What they hope to achieve, however, is a set of objectives that more often than not compete with one another: the corporate goal of maximizing revenue, user base and market size; the instinctive preference to allocate resources to engineers rather than lawyers; various legal obligations to protect users from online harm; and the moral urge to defend democracy, free expression and human rights.

It is extremely difficult, if not impossible, to craft a content policy that consistently hits all of these targets. A misstep in moderation risks costing platforms politically or sparking an exodus of users. The complex nature of ‘will,’ rooted in the struggle to balance financial, moral, legal and socio-political objectives, is reflected in the companies’ attempts to shed their political responsibility, evidenced, for example, by Meta founder and CEO Mark Zuckerberg’s opinion piece calling for third-party standard-setting for content moderation and by Meta President for Global Affairs Nick Clegg’s op-ed urging Section 230 reform. Content moderation may be the one domain where platform companies actually welcome regulation.

The User Role

Platforms have the right to moderate content, but they lack the capacity to scale their practices and the will to assume the responsibility in full, and the appropriate scope of government intervention remains contested; decentralization is therefore another viable solution. To decentralize content moderation is to empower users to determine what appears in their feeds and in what order. To some extent, this is already happening: when a user blocks an account, they are technically applying a customized filter to their feed under which that account will never show up again. Simply transferring responsibility to individual users, however, is unduly burdensome, since users are vulnerable to and exhausted by the overwhelming barrage of information. Effective, appropriate decentralization should therefore delegate moderation power to groups of users with the resources and subject matter expertise necessary to undertake this consequential task. Examples of these practices include “critical community” engagement, middleware, Twitter’s Bluesky initiative and the World Wide Web Consortium’s ActivityPub.
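
To make the idea concrete, the following is a minimal, hypothetical sketch of what user-side moderation “middleware” could look like. It does not reflect any platform’s actual API; the names (Post, FilterRule, block_accounts, drop_labels, compose_feed) are invented for illustration. The platform supplies the raw feed, while the rules deciding what is shown are chosen by the user or supplied by a trusted community labeler the user subscribes to.

```python
# Hypothetical sketch of user-side feed moderation ("middleware").
# All names are illustrative, not any real platform's API. The platform
# delivers the raw feed; filtering rules are chosen by the user or by a
# third-party "labeler" the user trusts.

from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class Post:
    author: str
    text: str
    labels: set[str] = field(default_factory=set)  # labels applied by a community labeler

# A filter rule is just a predicate: keep the post (True) or drop it (False).
FilterRule = Callable[[Post], bool]

def block_accounts(blocked: set[str]) -> FilterRule:
    """Rule equivalent to the familiar 'block' button."""
    return lambda post: post.author not in blocked

def drop_labels(unwanted: set[str]) -> FilterRule:
    """Rule that defers to labels applied by a labeler the user subscribes to."""
    return lambda post: post.labels.isdisjoint(unwanted)

def compose_feed(raw_feed: Iterable[Post], rules: list[FilterRule]) -> list[Post]:
    """Apply every user-chosen rule; only posts passing all rules appear."""
    return [post for post in raw_feed if all(rule(post) for rule in rules)]

if __name__ == "__main__":
    feed = [
        Post("alice", "Election results are out."),
        Post("spam_bot", "Click here!!!", labels={"spam"}),
        Post("bob", "Graphic footage from the scene.", labels={"graphic-violence"}),
    ]
    my_rules = [block_accounts({"spam_bot"}), drop_labels({"graphic-violence"})]
    for post in compose_feed(feed, my_rules):
        print(post.author, "-", post.text)
```

In a model along these lines, the platform’s role shrinks to transport and discovery, while the consequential judgment calls migrate to rule sets that users can adopt, swap or audit.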

The internet’s founders and pioneers once firmly rejected any form of top-down governance and entrusted the cause to users’ capacity to self-regulate. It might be too soon to declare that libertarian vision of internet governance dead, not least its faith in the wisdom of decentralized control. Instead of assuming the ever-contentious role of arbiter of speech and truth, platforms that choose to decentralize or delegate significant content moderation responsibility will leave users empowered, and perhaps remove government from the equation. Recent events are yet another wake-up call for social platforms to put this idea into action.
