It is rare that the enthusiasm of technologists for open protocols and interoperability aligns with the interests of scholars who publish in the Journal of Democracy. But in the last few months, those concerned about the role of behemoth technology firms in shaping the information ecosystem have advanced an important dialogue about the future of social media. Interest is growing in a set of ideas that regard the scale of the current platforms as dangerous to democracy and propose to address that danger by “unbundling” social media into distinct functions, separating the basic network layer from the higher-level function of content curation.
These ideas offer an alternative vision of what social media can be and how its functions can be delivered in a way that is congruent with democracy. As an early proponent of these ideas, I have summarized recent scholarly contributions to this dialogue, building on a longer commentary on these ideas I published earlier in Tech Policy Press. But applying such a radical rethink of the social media ecosystem will require thinkers from a variety of disciplines to work together on a range of issues.
Synthesizing the debate with respect to the key subject areas proposed by Daphne Keller, Director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, here are five that must be addressed for any solution to succeed:
- Free expression.
Protecting freedom of speech and expression while limiting harm is a prime argument for unbundling control of content curation from today’s platforms. Content curation by a dominant platform, even if regulated by governments or a pseudo-independent authority like the Facebook Oversight Board, is inherently authoritarian. Unbundling content curation and algorithmic filtering away from the network layer of social media may not directly do much to control the harms of abusive speech. But done well, it might decouple the perverse incentives of ad models that drive engagement rather than value, and permit a reduction in the scale of today’s content curation regimes. It would also remedy the inherently undemocratic nature of “platform law.”
Whether such a decoupling would produce more filter bubbles or echo chambers is an open question. But there seems to be a strong case that fragmenting curation and algorithmic filtering might contain the worst harms of disinformation, harassment, and other negative phenomena that seem to emerge as a function of the current model. Study and experimentation may clarify how best to reduce those concerns using the other means that some of the debating authors have proposed, without unacceptably restricting speech rights.
- Business models.
Most of the voices engaged in this debate view the advertising business model of today’s social media platforms as a central issue and current driver of harm. My own work has proposed shifting away from the perverse incentives of advertising as a potential remedy, based on innovations in consumer-based revenue. Unbundling may open up more opportunities to shift the revenue model for social media away from advertising.
But simply introducing some mechanism to share ad revenue to fund filtering services, as Francis Fukuyama suggests, could provide at least a partial decoupling, much as the traditional separation of publishing and editorial functions does in traditional news media businesses. If there are a multitude of filtering services, and advertising revenue shares are tied to some metric other than the engagement contribution of a given filtering service (such as monthly active users), the motivation to filter for engagement would be very much diluted. Absent the unbundling of filtering, shifting from targeted to contextual ads (as Nathalie Maréchal prioritizes) would potentially reduce data and privacy abuses, but it is not clear that it would significantly reduce incentives to filter for engagement rather than for user value, nor would it remedy the inherently undemocratic nature of platform law. These alternative models will need to be tested.
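The decoupling logic above can be made concrete with a minimal sketch. All names and numbers here are hypothetical illustrations, not a proposal from any of the cited authors: the point is simply that if the payout formula depends only on monthly active users, a filtering service gains nothing by maximizing engagement.

```python
def revenue_shares(filter_services, ad_revenue_pool):
    """Split an ad revenue pool among filtering services in proportion
    to monthly active users (MAU), not engagement, so that no service
    is rewarded for filtering toward engagement."""
    total_mau = sum(s["mau"] for s in filter_services.values())
    return {
        name: ad_revenue_pool * s["mau"] / total_mau
        for name, s in filter_services.items()
    }

# Hypothetical services: one optimized for engagement, one not.
services = {
    "civic_filter": {"mau": 2_000_000, "engagement_hours": 1_000_000},
    "viral_filter": {"mau": 1_000_000, "engagement_hours": 5_000_000},
}
shares = revenue_shares(services, ad_revenue_pool=300_000)
# viral_filter drives five times the engagement but earns half the
# revenue, because the payout tracks MAU only.
```

The design choice doing the work is the denominator: any metric in it that rises with engagement would reintroduce the perverse incentive the unbundling is meant to remove.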
In her consideration of these ideas, Maréchal gives primacy to the abuses of “surveillance capitalism.” I suggest this concern be narrowed to focus more specifically on the harms of “attention capitalism.” In this context, that means requiring that users be enabled to opt in (or not) and be compensated for their data and attention at rates they accept (another matter of business model innovation that I have proposed). More broadly, that means focusing controls on the uses of data in a way that does not preclude realizing the social value to be obtained from personal data when properly used.
- Privacy.
The right to privacy and the ownership of all facts relating to us is contingent and has limits. Cory Doctorow provides excellent analysis in the context of social media and our “epistemological crisis,” and Stefaan Verhulst explains how data is a non-rivalrous public good that cannot be owned and gains value from sharing. Consumer rights relate to how data about ourselves is used. Fukuyama’s group seems to have in mind less access to personal data, but my view is that these services need at least to have access to the key metadata on information sharing flows. That metadata is as important as the content itself, much as law enforcement often uses telephonic metadata on calls without access to the content of those calls. The metadata can be enough to infer the value of content items, much as Google’s algorithms infer Web page quality from the implicit human judgments embedded in link metadata.
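The claim that metadata alone can carry a value signal can be illustrated with a toy sketch. Everything here is invented for illustration (the reputation weighting in particular is an assumption, not a scheme from the article): a filtering service sees only who reshared what, never the content, yet can still produce a usable quality score.

```python
def infer_value_from_metadata(shares, reputation):
    """Illustrative only: score items from sharing metadata alone --
    who reshared what -- without inspecting any content. Items reshared
    by higher-reputation users score higher."""
    return {
        item: sum(reputation.get(user, 0.0) for user in users)
        for item, users in shares.items()
    }

# Hypothetical sharing flows and user reputations.
shares = {"post1": ["alice", "bob"], "post2": ["mallory"]}
reputation = {"alice": 0.9, "bob": 0.6, "mallory": 0.1}
scores = infer_value_from_metadata(shares, reputation)
```

A real system would of course be far richer (link-graph methods in the PageRank family work on the same principle), but the sketch shows why access to sharing metadata, rather than message content, may be the minimum a filtering service needs.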
- Competition and interoperability.
Competition and innovation are central to Fukuyama’s arguments. Of course, both can be poorly regulated, allowing abuses, harmful externalities, and recklessly moving fast and breaking things, as Maréchal alludes to. The Fukuyama group points to the need for a specialized regulatory agency to manage the complex and evolving issues here, as do I.
There seems to be broad agreement that interoperability is desirable, but this can be done at many levels in many ways, and the devil will be in the details. This is where an expert regulatory agency is likely most needed — to ensure a desirable level of openness. Interoperability is best understood in the context of the broader proposals for modularizing social media and deconstructing current platforms, such as by Jack Dorsey, Stephen Wolfram, Mike Masnick, Cory Doctorow, and Ethan Zuckerman, among others. There are strong arguments for a broader functional modularization and interoperability to maximize consumer choice and to unleash innovation in this still embryonic technology and how society applies it.
- Technological feasibility.
Is all of this possible? The technology itself is well within reach, and many equally complex interoperable systems have been deployed in recent decades in such demanding and dynamic fields as fintech and, closest to home, the Internet search/advertising/commerce complex. Hated as it may be, the auctioning, serving, and tracking of Web advertising is a marvel of highly dynamic and extensible interoperability with strict real-time constraints.
Specifics will depend on where we want the functional separations to be (each of the proponents has slightly different ideas, and Fukuyama described his group’s as a moving target) and on the technical approach (middleware APIs, protocols, or adversarial interoperability, among others). I see the core function of filtering services as upranking and downranking candidate items, and doing that in such a way that multiple user-selected filters (and perhaps some mandated filters) can be combined to yield a composite ranking that is used to produce each user’s feed.
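That composite-ranking function can be sketched in a few lines. The filter names, scores, and weighting scheme below are hypothetical, and a simple weighted sum is only one of many possible combination rules, but it captures the core idea: each filtering service independently upranks or downranks candidate items, and the user’s chosen weights blend them into one feed order.

```python
def composite_rank(items, filters, weights):
    """Rank candidate items by a weighted combination of scores from
    several filtering services. Each filter maps an item id to an
    up/down score (positive = uprank, negative = downrank)."""
    def combined(item):
        return sum(weights[name] * f(item) for name, f in filters.items())
    return sorted(items, key=combined, reverse=True)

# Two hypothetical user-selected filtering services.
filters = {
    "quality":  lambda i: {"a": 1.0, "b": -0.5, "c": 0.2}[i],
    "locality": lambda i: {"a": -0.2, "b": 0.9, "c": 0.1}[i],
}
weights = {"quality": 0.7, "locality": 0.3}
ranked = composite_rank(["a", "b", "c"], filters, weights)
```

A mandated filter would fit the same interface as just another entry in `filters`, possibly with a weight the user cannot change; the open design question is who sets the combination rule, not whether one can be built.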
A critical aspect of technological feasibility relates to content curation and its cost. Current methods of curation are hopelessly labor intensive, burdened with the impossible task of serving billions of masters in hundreds of countries at once. But the strategies I have detailed elsewhere promise to work much like Web search relevance filters, serving in real time as a cognitive immune system. These strategies draw on and augment crowdsourced wisdom, which has been shown to be nearly as effective as experts in judging the quality of social media content.
In conclusion, this debate makes it clear that there are no easy solutions to the problem of managing online discourse in a democracy, but it does suggest a path forward. Our mismanaged start down the recklessly exploitive path that has produced today’s social media industry has made this a crisis, muddied the waters, and damaged our ability to collaborate on a solution. It is time to rank this as a “crisis discipline” and begin a whole-of-society attack on these problems. More people need to join this urgent dialogue to reinvent a social media industry that advances democracy.
Running updates and additional commentary on these important issues can be found on Reisman’s blog.
Richard Reisman is an independent media-tech innovator and frequent contributor to Tech Policy Press. He has managed and consulted for businesses of all sizes, developed pioneering online services, and holds over 50 media-tech patents licensed by over 200 companies to serve billions of users. He blogs on human-centered digital services and related tech policy at SmartlyIntertwingled.com. His book, FairPay: Adaptively Win-Win Customer Relationships, and related blog, FairPayZone.com, introduce new customer-value-first revenue strategies for digital services that were described in Harvard Business Review.