
Scholars Reckon with Democracy and Social Media

Richard Reisman / Aug 9, 2021

It is apparent that social media has perturbed the way we relate to one another and the way we arrive, or fail to arrive, at consensus. This is a wicked problem, and there is little agreement on how to solve it. Many of the most discussed solutions, such as the current fashion in Congress for proposing poorly wrought reforms to Section 230 of the Communications Decency Act or for focusing on antitrust enforcement, are likely to be ineffective, politically stymied, or unconstitutional. There is an urgent need to sketch out a path forward.

A notable, if wonkish, contribution to this debate comes from the academic Journal of Democracy. Billed as “the world’s leading publication on the theory and practice of democracy,” the Journal has an editorial board that includes Francis Fukuyama, the distinguished Stanford political scientist. In April of this year, Fukuyama penned an essay titled “Making the Internet Safe for Democracy” that kicked off a round of contributions on how to reconcile social media and democracy.

In his April essay, Fukuyama focused on the “grave threat to democracy” that the dominant Internet platforms represent because of their unchecked power to “amplify or silence certain voices at a scale that can alter major political outcomes.” He sees the current approaches to reducing that power as “inadequate,” and critiques four categories of other remedies that are often suggested.

  1. Antitrust breakups: directed at economic harms, not harms to discourse, and countered by network effects that could enable a “baby Facebook” to quickly grow to the size of the parent.
  2. Government regulation of content: prone to failure because the US is now “far too polarized…to determine what is ‘fair and balanced.’”
  3. Data portability to enable switching to competitive platforms: prone to “difficulties involving both property rights and technical feasibility” relating to data that are “hugely heterogeneous and platform specific.”
  4. Privacy laws to limit use of personal data: experience with GDPR shows such regulations to be hard to enforce, plus the incumbents already have huge databases.

Instead of such remedies, he proposes “using both technology and regulation to outsource content curation from the dominant platforms to a competitive layer of ‘middleware companies.’” He takes aim at the undemocratic lack of legitimacy in the unprecedented power of private platforms to unilaterally amplify or silence certain voices, on the basis that “no democracy can rely on the good intentions of particular powerholders.”

The idea is to require the use of “middleware” software that works with a platform so that an intermediary service can “filter platform content not just to label but to eliminate items deemed false or misleading, or could certify the accuracy of particular data sources.” This could enable one or more services like NewsGuard (or any other brand a person may prefer and trust) to plug directly into the platforms. “Middleware could reduce the platforms' power by taking away their ability to curate content…outsourcing this function to a wide variety of competitive firms which in effect would provide filters that would be tailorable by individual users.”
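
To make these mechanics concrete, below is a minimal sketch, in TypeScript, of one shape such a middleware contract could take. It is an illustration only: the interface, type names, and verdict actions are assumptions drawn from the functions Fukuyama describes (labeling, eliminating, and certifying content via a user-chosen service), not any real or proposed API.

```typescript
// Hypothetical sketch only; not an existing or proposed API.
// A platform hands feed items to a user-chosen middleware service,
// which can label, remove, or certify them before the feed is rendered.

type Verdict =
  | { action: "allow" }
  | { action: "label"; note: string }      // e.g., "disputed by fact-checkers"
  | { action: "remove"; reason: string }   // items deemed false or misleading
  | { action: "certify"; source: string }; // vouch for a data source's accuracy

interface ContentItem {
  id: string;
  author: string;
  text: string;
}

// Each competing provider (a NewsGuard-like service, or any brand a user
// prefers and trusts) would implement this interface behind a common API.
interface MiddlewareFilter {
  name: string;
  assess(item: ContentItem): Promise<Verdict>;
}

// The platform applies whichever filter the user has selected.
async function curateFeed(
  feed: ContentItem[],
  filter: MiddlewareFilter,
): Promise<{ item: ContentItem; verdict: Verdict }[]> {
  const results: { item: ContentItem; verdict: Verdict }[] = [];
  for (const item of feed) {
    results.push({ item, verdict: await filter.assess(item) });
  }
  // Removed items are eliminated; labels and certifications travel with
  // the item for the client to display.
  return results.filter((r) => r.verdict.action !== "remove");
}
```

On this model, switching curators would simply mean pointing the platform at a different implementation of the same interface, which is the competitive, user-tailorable layer the proposal envisions.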

He anticipates the objection that this might reinforce “filter bubbles,” but considers it misdirected: suppressing harmful but legal content is not a proper role for policy “in a society that values free speech,” and it would in any case be technically very difficult. He acknowledges the challenges of getting government regulation to make this happen and of establishing rules for interfaces and for revenue-sharing mandates to fund the middleware services. Despite these issues, he argues that there is no other practical way to limit the threat to democracy posed by excessive platform control over speech.

This proposal is an outgrowth of a Stanford Working Group on Platform Scale. Fukuyama also published on these ideas with co-authors in Foreign Affairs and the Wall Street Journal. He cites precedent for this proposal in Twitter CEO Jack Dorsey’s Bluesky project, one of several initiatives to imagine a social media ecosystem built on an open, interoperable protocol rather than closed platforms like Facebook.

Following Fukuyama’s contribution, in July the Journal of Democracy published a series of essays under the theme “The Future of Platform Power.” These essays critique and build on Fukuyama’s proposal, and are summarized below:

Quarantining Misinformation by Robert Faris and Joan Donovan

Faris and Donovan, both researchers at Harvard’s Shorenstein Center on Media, Politics, and Public Policy, “support exploring new approaches that promote greater user control and autonomy on social media platforms but take issue with his narrow definition of the problem,” which they view as understating the capacity of bad actors to manipulate debate and suppress participation. They also suggest that “more technology cannot solve the problem of misinformation-at-scale” -- and may be co-opted by the same bad actors. They say that “[w]hole-of-society problems will ultimately require whole-of-society thinking and action.”

The authors support Fukuyama’s skepticism of antitrust solutions, seeing the growing consensus that the dominant platforms are not serving the public interest as “no more than a veneer covering profound political disagreement” -- but note that Fukuyama “sides with the conservatives” on the idea that the platforms should not moderate political speech.

Faris and Donovan agree that “allowing users to choose outside algorithms to curate their information feeds…does have considerable allure.” They suggest that “such a system could tap into the underutilized expertise of librarians in sorting out knowledge from information,” and that “[h]iring thousands of librarians could form the core of a middleware industry that ensures that a semblance of the truth still circulates.” They see “an institutionalist argument that we need gatekeepers for the marketplace of ideas,” but agree it is not clear that the platforms should have that role, given their business-model incentives and sensitivity to political and social pressure.

They agree that the current system is not up to the challenges, but are hesitant to pare it back “before better alternatives can be identified” -- suggesting limited experimentation. They are concerned that this “fragmentation by design” would work against a more unified public sphere and lead to more polarization (the filter bubble problem).

They conclude that the “oldest and most profound challenges of democracy” would remain, leaving much still to be done, and that in the immediate term we must demand that the platforms do more to address manipulation and abuse.

Fixing The Business Model by Nathalie Maréchal

Maréchal, a researcher and senior policy analyst at the tech watchdog organization Ranking Digital Rights, targets the business models of today’s platforms as the key problem, including their reliance on “surveillance capitalism,” a “blind faith in the invisible hand of the market,” and “technosolutionism.” Where she disagrees with Fukuyama is “on how to address these dynamics.” He “focuses exclusively on user-generated content while ignoring the moderation and targeting of ads,” while Maréchal suggests “the opposite approach: Fix how platforms govern the content and targeting of ads, and the rest will follow.” In this diagnosis, the targeted-advertising business model is fundamentally incompatible with democratic outcomes.

She agrees on the need for “disrupting the connection between income-generation (in this case, advertising) and editorial functions, much as traditional media organizations” do – but disagrees with the idea that turning to market competition is the answer. Her concern is that self-selected filter bubbles would be just as bad, and Trust and Safety teams would also be fragmented.

The bigger problem she sees is that there is no clear business model for the middleware filtering services, so that the revenue sharing that Fukuyama proposes would maintain the incentive for engagement over meaning. Alternatively, if they were funded by government or foundations, they would reflect those funders’ priorities -- and she doubts that funding by users themselves can be achieved.

Instead Maréchal says we “are facing a multifaceted problem and privacy legislation alone is no panacea, though I believe it is the pièce de résistance.” She advocates “data-minimization” as a way to “steer the ad sector back to the contextual-advertising paradigm.” She also favors broader use of intermediary liability for advertising content and targeting. (Maréchal refers to several of the similar proposals, including my Tech Policy Press article.)

Reining In Big Tech by Dipayan Ghosh and Ramesh Srinivasan

Harvard’s Ghosh and UCLA’s Srinivasan contend that the harms of social media “cannot be addressed merely by Fukuyama's proposed policy requirements for outsourcing of content moderation to middleware providers.” They seek to “clarify the connection between the business model…and the negative externalities.” They advocate “a more holistic view of the policy intervention required to renegotiate the balance of power between corporate and consumer—framed by consumer privacy, algorithmic transparency, and digital competition.”

Here again, the perversity of the engagement-driven advertising business model is seen as central, and as not addressed by Fukuyama’s unbundling. The authors see the need for “regulating all businesses concerned” in ways that “go beyond what Fukuyama proposes” to “restore democratic health” by limiting the harms of marketplaces “primarily oriented toward profit—rather than the public interest.”

Again, privacy is seen as the most urgent need, along with “a legal foundation for users to gain ownership and sovereignty over our personal data” to counter “a digital economy that commoditizes consumers' experience of media and treats their attention along with their data as currency.” Ghosh and Srinivasan also suggest that the dominant platforms “are indeed natural monopolies” that should be regulated as utilities.

Making Middleware Work by Daphne Keller

Stanford’s Keller brings a unique perspective as a former Associate General Counsel at Google with responsibility for intermediary liability issues. She is “very much a fan” of the Fukuyama proposal. “Unlike many other proposals to curtail platform power, middleware does not violate the First Amendment of the U.S. Constitution. In the United States, that makes middleware a path forward in a neighborhood full of dead ends. Before we can execute on the middleware vision, however, at least four problems must be solved.”

She likes the proposal as “broadly analogous to the unbundling requirements for telecommunications providers…bringing competition into markets that are subject to network effects.” More than just competition, she cites the Stanford group “comparing platforms’ control over public discourse to a loaded gun” and asking whether, for American democracy, it is safe to leave the gun on the table.

She suggests “[m]iddleware would reduce both platforms' own power and their function as levers for unaccountable state power, as governments increasingly pressure platforms to ‘voluntarily’ suppress disfavored speech.” The dead ends she has in mind include many proposals from the political left to force takedowns of dangerous user speech, and from the right to strip platforms of editorial control over what they choose not to carry.

Her caveat comes in four challenges that must be addressed (she claims limited expertise on the first two, but some on the last two):

  1. Technological feasibility: How can competitors remotely process massive amounts of platform data? She points to the Twitter initiative on this, and to Stephen Wolfram’s suggestions to Congress on how this might be done, saying she is “a cautious optimist.”
  2. Business model: “How is everyone going to get paid?” requiring new models or sharing of ad revenue.
  3. Curation costs: Facebook uses tens of thousands of people to moderate content in dozens of languages, plus costly machine-learning tools. The theory is that independent filtering services can limit their scope and/or share underlying services, focusing just on the unique judgment of user-relevance and value. But reliance on shared services could “reimpose the very speech monoculture” that middleware was supposed to save us from. Keller also recounts the failure of the Internet Content Rating Association (ICRA) to attract third-party curators a couple of decades ago as a sobering example, but notes that it had no revenue model.
  4. Getting privacy right: The Cambridge Analytica scandal highlights the sensitivity not only of data about individual users, but of data from their friends. She notes that Fukuyama proposes that the filtering services not see privately shared data from friends, but is concerned that this would hobble such services when friends share misinformation (a minimal sketch of that restriction follows this list). Getting friends’ consent seems unwieldy. There may be technical solutions, but she is not yet convinced.
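
To illustrate the privacy constraint in Keller’s fourth challenge, here is a minimal TypeScript sketch of how a platform might withhold privately shared posts from an external filter. The field names and the simplified three-level visibility model are assumptions for illustration; no real platform’s API is implied.

```typescript
// Hypothetical sketch of the privacy constraint in Keller's fourth challenge.
// Field names and the visibility model are assumptions for illustration.

type Visibility = "public" | "friends" | "private";

interface Post {
  id: string;
  text: string;
  visibility: Visibility;
}

// Only public posts are handed to the external middleware service; privately
// shared posts stay with the platform's own moderation. Keller's worry is
// visible here: misinformation shared friends-to-friends never reaches the
// third-party filter.
function partitionForMiddleware(feed: Post[]): {
  sendToFilter: Post[];
  platformOnly: Post[];
} {
  return {
    sendToFilter: feed.filter((p) => p.visibility === "public"),
    platformOnly: feed.filter((p) => p.visibility !== "public"),
  };
}
```

A real system would need consent mechanics or privacy-preserving computation to do better than this blunt split, which is precisely the open question Keller flags.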

She concludes with reference to a Venn diagram with interlocking circles for privacy, speech, and competition, with interoperability (including for content moderation) in the middle. “This is what makes it hard. We have to solve problems in all those areas to make middleware work. But this is also what makes the concept so promising. If—or when—we do manage to meet this many-sided challenge, we will unlock something powerful.”

Solving for a Moving Target by Francis Fukuyama

Fukuyama’s response is that his group’s proposal is aimed at “reducing these platforms' power over political speech” and rests on a normative view about the continuing importance of freedom of speech. He notes that “three of our critics do not take into account…the illegitimacy of using either public or private power to suppress this hazard. …Middleware is the most politically realistic way forward.”

He credits the criticisms as “uniformly thoughtful” and says his work group is continuing to develop the idea and hopes to have a public demonstration prototype by the end of the year.

Fukuyama does not dismiss the possibility that his unbundling might not stem the flow of toxic content or the ability of bad actors to exploit the openness of these forums. But he argues that the legitimate object of public policy is not to eliminate that content but to limit its artificial amplification. “The marketplace of ideas has indeed failed…because the big platforms have the power to distort markets in unprecedented ways.”

To the criticism that he is siding with conservatives, he counters that “the underlying power of these private, for-profit platforms to silence a major voice in U.S. politics should be troubling to any supporter of liberal democracy.” His group rejects ideas like the FCC’s old Fairness Doctrine as politically unrealistic given the level of polarization in the U.S. He argues that his proposals are politically neutral, avoiding the fraught issues of state regulation or modifications to Section 230.

On GDPR-like privacy regulations as an alternative, he agrees that privacy abuse creates a competitive advantage for the platforms, but fears that such restrictions may not “erode the power of the big platforms to amplify or suppress political messages” -- and that future restrictions may even benefit the incumbents.

Fukuyama reports two areas where his group’s ideas are evolving. One is to favor what they call the lighter version of middleware, which leaves some of the most complex and costly, but less politically sensitive, moderation functions with the platforms, namely the “moderation of nonpolitical content, pornography, graphic violence, criminal incitement, and the like.” He also supports a new focus on the handling of takedowns, especially political takedowns in other countries in response to authoritarian demands. The question of whether filtering services are empowered to take down content or simply to label it is left for further consideration. He agrees that Keller “points to some real difficulties” but “we will not know if they are solvable unless we try.”

Where next?

Few in industry seem to be looking beyond the short-term mess we are in to find ways for social media platforms not simply to limit harms to democracy, but to positively shape and support democratic societies over coming decades. Without such a guiding vision, it is hard to know a good solution from a bad one.

There is no silver bullet. But I contend that the unbundling of content moderation and filtering from the network layer of social media platforms, much as the Fukuyama group has proposed, is the one bullet that has a silver jacket. Some form of this unbundling will be the essential leverage point for managing speech on digital networks in a way that preserves democratic freedoms. Unbundling can carry into the digital realm the robust flow of ideas, supported and mediated by an ecosystem of communities and institutions, that has been central to democracy for centuries.

But to get there, the debate must move out of academic journals and into the broader community, including investors, entrepreneurs, executives, institutions, policymakers, and, indeed, users. Today’s social media industry disintermediated much of our information ecology with no thought to what it was breaking. It will require all of society to build pro-democratic platforms that integrate into our epistemic social ecology in a way that can stand the test of time.

Reisman builds on this debate -- adding some forward-looking ideas -- in a companion piece, Unbundling Social Media: A Taxonomy of Problem Areas.

Running updates and additional commentary on these important issues can be found on Reisman’s blog.

Authors

Richard Reisman
Richard Reisman (@rreisman) is a non-resident senior fellow at the Foundation for American Innovation and a frequent contributor to Tech Policy Press. He blogs on human-centered digital services and related tech policy at SmartlyIntertwingled.com, and his work was cited in a Federal Trade Commission...
