Progress Toward Re-Architecting Social Media to Serve Society
Richard Reisman / Dec 1, 2021

A family of radical proposals for better managing social media to serve users and society is gaining interest. A recent Stanford HAI Conference session generated further productive dialogue on ideas for unbundling social media content moderation and filtering to “middleware” providers that operate independently from platforms. The discussion complemented an earlier debate at a Tech Policy Press event, Reconciling Social Media and Democracy. As I previously summarized, some experts are cautiously enthusiastic about middleware proposals, while others see merit but fear complications relating to speech, business models, privacy, competition and interoperability, and technological feasibility.
At the HAI session, which was moderated by Renée DiResta, Stanford professor Francis Fukuyama explained a middleware proposal from the Stanford Working Group on Platform Scale and how it has evolved, and his colleague Ashish Goel provided more technical detail and a concept demonstration.

The driver of Fukuyama’s proposal is not only that the social media platforms are recklessly harming society, but that their control over speech threatens the very core of democracy. The dominant platforms filter what content each of us sees or does not see in our social media feeds, giving them too much power over the public sphere. The proposed remedy is to shift that power to users -- supported by a diversity of independent filtering services working as agents for groups of users -- who can select among those filtering services in an open market.
The Stanford HAI Panel. Source
The Stanford Working Group has faced concerns that if personal data about users that the platforms currently use in filtering were provided to many filtering services with varying levels of resources, then protecting privacy would prove difficult. Fukuyama said that led them to scale back the proposed scope of filtering services to reduce the need for such data. The current suggestion is that filtering services simply provide input that the platforms could use to filter presentations in accord with the services each individual user selects. This more limited approach is also motivated by the expediency of remaining relatively unthreatening to the platforms and their business models.
Goel walked through a demo of a Chrome browser app for Twitter that was limited to labeling questionable content based on the user’s activation of one of several pre-selected labeling services. Clicking on one caused labels from that service to appear at the top of the respective content items in the otherwise unchanged Twitter news feed. A label from one filtering service might indicate that the item “needs context / claims unverified,” while a stronger label from a more critical service might say “misleading” and remove the body of the item from view (but might offer the user a button to view it). Fukuyama and Goel explained that they hoped to extend this to also enable the filtering services to do ranking or scoring of items.
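To make the mechanics concrete, here is a minimal sketch of how such a labeling layer might work inside a browser extension. The LabelService interface, the severity values, and the data-tweet-body selector are illustrative assumptions, not the actual API or markup of the Stanford prototype or of Twitter.

```typescript
// Hypothetical sketch of a labeling middleware layer, in the spirit of the demo.
// Interface names and the DOM selector are assumptions for illustration only.

type Severity = "context" | "misleading";

interface Label {
  tweetId: string;
  severity: Severity;
  note: string; // e.g. "needs context / claims unverified"
}

interface LabelService {
  name: string;
  // Fetch labels for the tweets currently visible in the feed.
  getLabels(tweetIds: string[]): Promise<Label[]>;
}

// Apply labels from the user's selected service to tweet elements in the page.
async function annotateFeed(service: LabelService, tweets: Map<string, HTMLElement>) {
  const labels = await service.getLabels([...tweets.keys()]);
  for (const label of labels) {
    const el = tweets.get(label.tweetId);
    if (!el) continue;

    // Insert the label banner at the top of the tweet.
    const banner = document.createElement("div");
    banner.textContent = `${service.name}: ${label.note}`;
    el.prepend(banner);

    // A stronger label hides the tweet body but offers a button to reveal it.
    if (label.severity === "misleading") {
      const body = el.querySelector<HTMLElement>("[data-tweet-body]");
      if (body) {
        body.style.display = "none";
        const show = document.createElement("button");
        show.textContent = "View anyway";
        show.onclick = () => { body.style.display = ""; show.remove(); };
        banner.append(show);
      }
    }
  }
}
```

The key design point is that the feed itself is untouched; the middleware only adds labels (or hides bodies behind a click-through), which keeps this first phase minimally threatening to the platform.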
Other very basic examples of middleware -- from prototypes to functioning products -- are beginning to emerge. Fukuyama and Goel pointed to an independent “middleware” product offering they happened upon a month earlier, called Preamble. This promised a service for Twitter to adjust rankings in accord with a user selection of “values providers.” Other notable basic examples include Ethan Zuckerman’s Gobo demonstration system, which also did rankings for Twitter, and Block Party, a startup company that filters Twitter feeds to limit harassment. The panel displayed a short video from Jonathan Zittrain to illustrate how sliders could make direct control by users reasonably simple.
Despite enthusiasm for such examples, the feasibility of radical reform was a recurring concern throughout the discussion. What would compel the platforms to accept the changes necessary to make middleware work? Is there any chance legislators can agree on any of the deep reforms necessary? Fukuyama and Goel seem to be hedging on this -- backing off on bolder ambitions for unbundling for now, while arguing for a new expert regulatory agency that could mandate such changes and overcome platform resistance in the future.
Another panelist, Katrina Ligett of Hebrew University, reinforced the need for filters to be bolder -- to consider not only content items, but the social flow graph of how content moves through the network and draws telling reactions from users. Ligett connected this to the emerging approach of data cooperatives -- which would intervene between a platform and the user on the usage of personal data, again acting as the user’s agent -- as another kind of middleware. She also emphasized that some aspects of so-called personal data, such as this social flow graph data, are really pooled and collective, with a value greater than the sum of their parts.
The direct focus of filtering against harm is the personalization and tailoring of what Ligett called “the incoming vector” from the platform to us -- but what drives those harms is how the platforms learn from the patterns in “the outgoing vector” of content and reactions from us. Unlike a billboard, which presents harmful content indiscriminately, the platforms learn how to make it effective by feeding it to those most susceptible, when they are most susceptible. Ligett argued that interventions must benefit from a collective perspective. This is how social mediation can enter the digital realm, providing a digital counterpart to traditional mediation processes.
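As a rough illustration of what such pooled, collective data might look like, here is a minimal sketch of a simplified social flow graph schema and a summary computed over it. The field names and reaction types are assumptions made for illustration, not any platform’s or data cooperative’s actual data model.

```typescript
// Illustrative sketch only: a simplified schema for "social flow graph" data --
// how an item propagates through the network and what reactions it draws.

interface FlowEvent {
  itemId: string;
  fromUser: string;   // who exposed the item (poster, sharer, or recommender)
  toUser: string;     // who received it in their feed
  reaction?: "like" | "share" | "reply" | "report";
  timestamp: number;
}

// A collective view: for each item, how widely it spread and how often it drew
// shares or reports. Pooled across many users, this signal is more telling than
// any single user's data taken alone.
function summarizeSpread(events: FlowEvent[]) {
  const summary = new Map<string, { exposures: number; shares: number; reports: number }>();
  for (const e of events) {
    const s = summary.get(e.itemId) ?? { exposures: 0, shares: 0, reports: 0 };
    s.exposures += 1;
    if (e.reaction === "share") s.shares += 1;
    if (e.reaction === "report") s.reports += 1;
    summary.set(e.itemId, s);
  }
  return summary;
}
```

The point of the sketch is that this kind of aggregate, network-level signal is exactly what a data cooperative could hold and expose to filtering services without handing over any individual’s raw activity.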
University of Washington researcher Kate Starbird made related points, reinforcing the theme that middleware solutions do not get to “the heart of the problem…how information gets to us…algorithms and structure.” Starbird suggested that toxicities on social media are not primarily related to individual pieces of content that can be labeled, but rather to the algorithms that amplify and recommend, creating influence across networks. She also noted that studies suggest labeling is ineffective, and that labels can trigger reactions that increase polarization.
Starbird also addressed concerns that a diversity of filtering services might do little to break up filter bubbles or echo chambers -- and might even make them worse. Fukuyama agreed that this is a risk, but one that is always present due to American society’s strong reliance on the First Amendment, with its foundation in “Eighteenth Century notions” of freedom and democracy. He mused about possibly revisiting the First Amendment, but discounted that idea given the obvious political challenges.
Stepping back, the dialogue at the panel on whether variations on these proposals go too far or not far enough suggests that policy makers would benefit from more clarity on issues and solutions for unbundling along two dimensions -- scope and phasing:
- Scope: critics of unbundling suggest it does not address harmful speech directly enough. The advocates agree these measures have limits -- and support applying other remedies as well, to the extent they are not censorious. Unbundling is no panacea, but it is an essential way to limit the grave risks of platform power -- and can facilitate the targeting of other measures.
- Phasing: it is not yet clear how unbundling can best be made to work as imagined, and that will take time to work through. The advocates agree this will be challenging, but argue that democracy requires that we rise to the need and work through the issues in phases. We need to apply care, discipline, and transparency to experiment, learn, and evolve -- testing changes in contained contexts and evaluating them before rolling them out to billions of users.
Unbundling might gain broader support by defining distinct phases of successively increasing scope -- and clarifying how they mesh with other complementary measures. This might start, as Fukuyama now suggests, with labeling, then scoring and ranking. It might then build, as the others suggest, toward integration with data cooperative services -- to enable more powerful data-based filtering that can meet bolder objectives while protecting privacy and addressing other concerns. My own suggestions for building on this model are outlined in a blog post, Directions Toward Re-Architecting Social Media to Serve Society.
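To suggest how such phasing might translate into successively richer middleware capabilities, here is a hypothetical sketch of the interfaces each phase could expose. These interfaces and names are illustrative assumptions, not a proposed standard or any group’s actual specification.

```typescript
// Hypothetical sketch of phased middleware capabilities, from labeling to
// ranking to data-cooperative-mediated filtering. Illustrative only.

// Phase 1: labeling only (as in the demo) -- the platform displays the labels.
interface LabelingService {
  label(itemId: string): Promise<{ note: string; severity: "context" | "misleading" } | null>;
}

// Phase 2: scoring and ranking -- the platform applies the scores to order the feed.
interface RankingService {
  score(itemId: string): Promise<number>; // higher score = rank higher in the user's feed
}

// Phase 3: richer filtering that draws on pooled signals held by a data cooperative,
// so filtering services never handle raw personal data directly.
interface CooperativeFilteringService {
  filter(itemIds: string[], cooperativeSignals: Map<string, number>): Promise<string[]>;
}
```

Each phase widens what the user’s chosen service can do while keeping personal data handling with the platform or a cooperative, rather than with the many filtering services themselves.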
Advancing these ideas is important. Over centuries, society evolved an ecosystem of publishers, interest groups, and other institutions to collaborate in curating human discourse toward a shared reality. A similar, decentralized digital ecosystem could further advance human discourse – to not just contain and overwhelm harmful flows of information, but to foster and amplify beneficial flows. Such an ecosystem will not be built in a day, but it is time to start on the path, and adjust as we go.
More on these unbundling proposals and other suggestions for re-architecting social media can be found on Reisman’s blog.