
The Algorithmic Management of Misinformation That Protects Liberty

Richard Mackenzie-Gray Scott / Aug 23, 2023

Content moderation algorithms can be designed to reduce the spread of misinformation while protecting the very rights they threaten, says Richard Mackenzie-Gray Scott, a postdoctoral researcher at the University of Oxford.

There is sustained sentiment in many democracies that social media platforms could do more to mitigate misinformation, even if consuming it does not always lead to harm. Yet many regulatory approaches risk jeopardizing free speech. Whether by expanding the grounds for intermediary liability, downgrading or removing content, or deplatforming users, efforts aimed at decreasing the existence and reach of misinformation may compromise free speech. But there are measures with the potential to both reduce misinformation and protect speech.

What has been overlooked in discussions about counteracting misinformation is the role of another human right: freedom of thought. This freedom helps us form ideas as we interact with the world. It shapes our decision-making and guides our conduct. And our freedom of thought influences our freedom of speech. The connection can be understood as an ‘ongoing, cyclic, social process’. Elements of freedom of thought include exposure to and digestion of information, interaction with interlocutors, and reflection on related exchanges, which is why censorship may affect the freedom of thought of actual and potential recipients of information and ideas.

Similarly, for speech to be free, the thinking that precedes it requires cognitive liberty. Despite the uncertainties regarding the relationship between belief and behavior, providing opportunities that encourage individuals to think freely may decrease the volume of reactive speech during social media interactions. Platform design stimulates reactive behavior, in part by enabling users to express themselves quickly. Interrupting this tendency is important, including online, where information constantly bombards our brains and content curation may insulate users from unfamiliar ideas. A recent study of Facebook found that "users are much more likely to see content from like-minded sources than they are to see content from cross-cutting sources." Another recent study shows that the platform is also ideologically segregated. Such factors can lead people to overlook alternative perspectives and become anchored to their own ideas and opinions, sometimes conflating them with principles and values.

Nourishing freedom of thought and, by extension, free speech depends on exposure to diverse sources of information. Someone exposed to different forms and substances of expression on a particular topic has the opportunity to think about it more fully than a person exposed to fewer perspectives, especially if given time for reflection. This approach to information dissemination and consumption also brings with it the possibility of helping individuals be more receptive in their interactions, more willing to pay attention to viewpoints with which they disagree. In turn, the quality of communication can improve, perhaps even become convivial. Consuming different sources of information provides distinct filtered views of our shared reality. If individuals constantly observe information through only one filter, it is understandable when they do not believe information that emerges from other filters.

Access to various perspectives on an issue promotes the ability to think through the nuances and variables at play while forming an idea or opinion. Without consideration of alternative views, and prompts to consider them, people can arrive at firm conclusions too quickly, sometimes expressing them with so much certainty that new information is ignored, drowned out in a deluge of bias, overconfidence, or their destructive combination. Countering such behavior forms part of dispelling misinformation. And there happens to be a tool that social media platforms could be using differently to reduce the spread of misinformation while operationalizing freedom of thought.

The tool comes in the form of a digital nudge. Although there are a number of general caveats to be mindful of when considering policies that feature nudges, the specific type of nudge and how it is designed matter. Recent research draws further attention to a form of digital nudging with the potential to stimulate freedom of thought in social media users, rather than attempting to think for them. Its deployment relies on an algorithm designed to display alternative sources of information in an interstitial pop-up should a user click to share content containing misinformation.
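
To make the mechanism concrete, the following is a minimal sketch in Python of how such an interstitial might sit between a share click and the share itself. The threshold, the upstream misinformation score, and the small source catalogue are illustrative assumptions, not any platform's actual implementation.

```python
from dataclasses import dataclass

MISINFO_THRESHOLD = 0.8  # assumed score above which content is treated as likely misinformation


@dataclass
class Post:
    topic: str
    misinfo_score: float  # e.g. the output of an upstream classifier (assumed)


def alternative_sources(topic: str, limit: int = 3) -> list[str]:
    """Placeholder lookup of third-party sources covering the same topic."""
    catalogue = {"vaccines": ["who.int", "nature.com", "reuters.com"]}
    return catalogue.get(topic, [])[:limit]


def on_share_attempt(post: Post, user_choice: str) -> bool:
    """Return True if the share goes through.

    `user_choice` stands in for the user's response to the interstitial:
    'share_anyway', 'read_alternative', or 'cancel'.
    """
    if post.misinfo_score < MISINFO_THRESHOLD:
        return True  # no friction for unflagged content
    sources = alternative_sources(post.topic)
    print(f"Before sharing, consider other coverage of '{post.topic}': {sources}")
    return user_choice == "share_anyway"  # reading or cancelling pauses the share


print(on_share_attempt(Post("vaccines", misinfo_score=0.95), user_choice="cancel"))  # False
```

The point of the sketch is the placement of the friction: the share is paused, not blocked, and no truth label is attached to the content.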

This mechanism is similar to the information panels that provide further context on a topic, such as those used on Facebook, Twitter (now X), and YouTube. Key differences are the greater friction introduced into the user interface and the presentation of third-party content hosted outside the platform. Should a user go to share content containing misinformation, interrupting this engagement provides a moment of pause and the opportunity to consume related, but different, content. This chance for reflection, and the doubt it may encourage, however minimal, could deter the user from sharing the misinformation any further.

But unlike fact-check alerts and associated information panels, digital nudges intended to encourage consumption of a more diverse corpus of information would not label any content in terms of its relationship to truth (for example, ‘false’, ‘misleading’, ‘disputed’). Users could therefore become less dependent on platforms deciding for them which information should and should not be lent credence. It is concerning that social media platforms have become arbiters of truth, and more concerning still if users rely on them continuing in this role. For freedom of thought to flourish, people require the agency to arrive at informed conclusions themselves after navigating the available information.

Of course, navigating the online information environment is difficult, not only because of the volume of information but also because of platform designs that are effective at catching and holding attention. The information people receive online can be tailored with precision to their data profiles. These factors become acute on platforms that function to generate more quantitative user engagement in order to increase profits based on the extraction and exploitation of the related data. The digital nudging described above works with this logic. The more time a user spends on a particular platform, the more that platform knows about them, including which sources they are likely to trust. And trust is key, because of its link to users’ personal identities and affinities, which shape, and are shaped by, the ‘narratives and networks’ of their belief systems. A user may trust one source of information but not another, even if both communicate similar messages. Related user data can thus indicate which sources are likely to elicit the highest levels of trust from each user, which a content moderation algorithm could incorporate when presenting alternative sources to users who interact with misinformation.
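
As an illustration of how such trust signals might feed into the selection of alternatives, here is a small Python sketch that ranks candidate sources by a per-user trust estimate. The trust scores and outlet names are hypothetical; a real system would derive them from engagement data the platform already holds.

```python
def rank_alternatives(candidates: list[str],
                      user_trust: dict[str, float],
                      original_source: str,
                      limit: int = 3) -> list[str]:
    """Pick up to `limit` alternative sources the user is most likely to trust."""
    pool = [c for c in candidates if c != original_source]
    # Sources with no trust estimate get a neutral default of 0.5 (an assumption).
    return sorted(pool, key=lambda s: user_trust.get(s, 0.5), reverse=True)[:limit]


# Hypothetical data: a user who engages most with a local news outlet.
trust = {"localnews.example": 0.9, "wireagency.example": 0.6, "blog.example": 0.2}
print(rank_alternatives(["localnews.example", "blog.example", "wireagency.example"],
                        trust, original_source="blog.example"))
```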

This approach offers a corrective to online platforms that limit user exposure to viewpoints beyond those with which users are already familiar. Features of algorithmic curation on social media, combined with how humans process and respond to information, challenge the marketplace of ideas metaphor, in which ideas are assumed to compete, with good ones supposedly accepted and bad ones rejected. But buying into these assumptions is partly why misinformation can attain outsized reach. Ideas can only compete if they are known. Different perspectives cannot be considered during user engagement if users are not aware of them in the first place. Managing misinformation requires multipronged responses that include providing alternative explanations to account for content that is plausible but ultimately false.

Accurate information needs help to compete in online marketplaces of attention. Although social media usage, depending on the platform, may break echo chambers, accurate information, no matter the amount, cannot reduce concentrations of misinformation within a social media user’s feed unless it permeates that feed. One delivery method that can provide such a transfer is alternative source digital nudging. With algorithms supplying diverse options and prompting users to consume more than one initial source, there is an attempt to interrupt our default, unconscious thought processes, even if briefly. This window of friction may be enough to forestall users from sharing content containing misinformation. Deploying the related algorithm is thus geared towards prompting conscious thought in users. It aims to promote rational thinking by offering the opportunity to consider other sources of information before making choices about what to share on social media.

Related algorithms would be designed so that new information is automatically presented to a user when they interact with sources containing misinformation, in a way attuned to their affectivity. Using alternative information as a component of counteracting misinformation needs to account for how it makes people feel in addition to what they think. As Jon Meacham wrote: ‘The ability to apply what one thought in order to shape how one felt, however, [is] another, more difficult thing’. It is the combination of thoughts and feelings that drives human will. Misinformation mitigation strategies need to factor in this mix if they are to have a hope of being effective.

Another consideration is when to use this measure. One proposal is for a platform to deploy the related algorithm when there are spikes in misinformation on that platform. While the algorithm could operate continually, usage could be limited to periods when circumstances warrant it, given what is at stake during a particular influx of misinformation. As such, platforms might consider utilizing the algorithm in the periods surrounding elections, as well as during conflicts, disasters, and pandemics, and after attacks that may be wrongly ascribed to a particular group. A potential difficulty is that if platforms appear to be nudging users towards pre-selected information at times when doing so is politically salient, this approach could contribute to distrust.
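
One way such spike-based deployment could be operationalized is sketched below in Python: the nudge switches on only when the platform-wide rate of flagged content rises well above a rolling baseline. The window size and threshold ratio are assumptions chosen purely for illustration.

```python
from collections import deque


class SpikeTrigger:
    """Activate nudging only when flagged-content volume spikes above a rolling baseline."""

    def __init__(self, window: int = 7, ratio: float = 2.0):
        self.history = deque(maxlen=window)  # recent daily counts of flagged posts
        self.ratio = ratio                   # how far above baseline counts as a spike

    def update(self, flagged_today: int) -> bool:
        """Record today's count and return True if nudging should be active."""
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(flagged_today)
        return baseline is not None and flagged_today > self.ratio * baseline


trigger = SpikeTrigger()
for count in [100, 110, 95, 105, 300]:   # hypothetical daily flag counts
    print(count, trigger.update(count))  # nudging switches on only for the final spike
```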

That said, limiting deployment of the algorithm to periods when there are influxes of misinformation on a particular platform mitigates the risk of gradually guiding users towards more readily accepting predetermined viewpoints. Prolonged exposure to alternative source digital nudging could ultimately narrow user exposure to sources of information if not conducted carefully. For instance, if the algorithm has been designed to present alternative sources that happen to fall within only one partisan persuasion, it could ultimately bias users in favor of positions based on that political doctrine. This apprehension mainly depends on the selection of sources to present as alternatives: who is selecting them, how they are selected, and who decides who will make these decisions (and in what way).
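
A crude illustration of how that selection could be kept from skewing towards one persuasion is to draw alternatives round-robin across outlet categories, as in the following sketch. The categories and outlets are hypothetical, and deciding who defines those categories is precisely the governance question raised above.

```python
from itertools import zip_longest


def balanced_alternatives(sources_by_category: dict[str, list[str]],
                          limit: int = 3) -> list[str]:
    """Interleave sources across categories so no single persuasion dominates."""
    selected: list[str] = []
    for group in zip_longest(*sources_by_category.values()):
        for source in group:
            if source is not None and len(selected) < limit:
                selected.append(source)
    return selected


# Hypothetical category labels and outlets.
pools = {
    "left-leaning": ["outlet-a.example"],
    "right-leaning": ["outlet-b.example"],
    "wire/agency": ["agency-c.example", "agency-d.example"],
}
print(balanced_alternatives(pools))  # one source drawn from each category
```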

The related risk is that users who respond positively to alternative sources could end up closing the feedback loop informing the algorithm, ushering them closer towards previously agreed-upon positions on particular subjects. Such a regression in access to diverse content, and the resulting hindrance to free thinking, would run contrary to the intention behind alternative source digital nudging. Nonetheless, intentions do not necessarily translate into outcomes. Guarding against over-reliance on digital nudging and the risks it brings may mean that the best option is to phase out any version of this measure as misinformation declines on a particular platform, with re-introduction contingent on further spikes.

A further balancing act for platforms is to design algorithms and user interfaces so that enough friction is created to reduce misinformation sharing, but without the nudges becoming so recurrent that users grow frustrated with them. Such frustration could decrease user engagement on the applicable platform. People might also leave it, whether permanently or temporarily, and potentially for a competitor. Particularly if that competitor has less rigorous content moderation, such abandonment may do little, if anything, to stem the flow of misinformation, and could even increase it.
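
One simple way a platform might calibrate that friction is to cap how often any individual user sees the interstitial, as in the sketch below. The cap value is an assumption; in practice it would need tuning against both sharing behavior and user retention.

```python
class NudgeBudget:
    """Cap how many interstitials any one user sees within a rolling 24-hour window."""

    def __init__(self, max_per_day: int = 3):
        self.max_per_day = max_per_day
        self.shown: dict[str, list[float]] = {}  # user id -> timestamps of past nudges

    def allow(self, user_id: str, now: float) -> bool:
        """Return True if this user can be shown another nudge at time `now` (in seconds)."""
        recent = [t for t in self.shown.get(user_id, []) if t > now - 24 * 3600]
        if len(recent) >= self.max_per_day:
            return False  # skip the interstitial; the share proceeds without friction
        self.shown[user_id] = recent + [now]
        return True


budget = NudgeBudget(max_per_day=2)
print([budget.allow("user-1", now=t) for t in (0.0, 10.0, 20.0)])  # [True, True, False]
```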

While there are advantages to utilizing digital design of the sort deployed via algorithms, digital nudging on social media newsfeeds as a measure to manage misinformation needs further public appraisal. Social media platforms shape communication today and thus shape the associated human rights of speech and thought. This power requires shepherding in the public interest. Open debate is crucial. Whichever way the scales ultimately tip, the related contestations would do well to occur in public forums so as to provide for adequate scrutiny. Nourishing deliberative democracy on this matter means people providing their input and being heard. Before digital nudges are implemented further, the public deserves chances to share its thoughts and have them acted upon by representatives accountable for their decisions. Governments and companies are responsible for providing opportunities for people to become participants in such algorithmic governance, not merely subject to it.

With the Social Media NUDGE Act before Congress, there is a chance for the regulation of online spaces to develop in alignment with public preferences. A component of this legislation is that, if enacted, it would require online platforms to implement and measure the impact of ‘content-agnostic interventions’. Alternative source digital nudges can be content-agnostic if designed with care. This measure presents an opportunity for social media users to consume different sources of information, including sources that may never ordinarily appear to them online because of algorithmic curation. Sharing data publicly on the effectiveness of digital nudges would also help inform related practice. Given the heterogeneity in the research on nudging and its results, providing such data would eventually confirm or refute whether, and which versions of, digital nudges reduce the spread of misinformation online.
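
The kind of effectiveness data worth sharing publicly could be as simple as comparing share-through rates on flagged content between nudged users and a holdout group, as in the following sketch. The figures are hypothetical and stand in for measurements a platform would actually report.

```python
def share_through_rate(shares: int, share_attempts: int) -> float:
    """Fraction of share attempts on flagged content that were completed."""
    return shares / share_attempts if share_attempts else 0.0


# Hypothetical counts from a nudged group and a holdout group of equal size.
nudged = share_through_rate(shares=420, share_attempts=1000)
holdout = share_through_rate(shares=610, share_attempts=1000)
relative_reduction = (holdout - nudged) / holdout

print(f"Nudged group share-through rate:  {nudged:.1%}")
print(f"Holdout group share-through rate: {holdout:.1%}")
print(f"Relative reduction in sharing:    {relative_reduction:.1%}")
```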

Powerholders can strike a better balance between human rights and community interests when addressing misinformation and its circulation in online spaces. Digital technology can help here if used with diligence. All this said, we must not get lost in the allure of technological measures promising quick fixes to complex social problems. Sometimes such measures are simply performative, adopted to create the appearance that an issue is being dealt with rather than actually addressing it appropriately. Misinformation on social media may be compounded by technical problems, such as platform algorithms optimizing for quantitative increases in user engagement. But the problem is ultimately sociopsychological. Variations of the alternative source digital nudge may therefore come to form part of reducing the reach and impact of misinformation. Yet even though this measure is also capable of promoting freedom of thought and protecting free speech, any version of it is, at best, an accompaniment to initiatives that treat the underlying causes of misinformation.

- - -

This article is based on research that received funding from the British Academy (grant no. BAR00550-BA00.01). The author thanks Bethany Shiner, Daniele Nunes, Halefom Abraha, Kate O’Regan, and Six Silberman for their helpful feedback on the original draft.

Authors

Richard Mackenzie-Gray Scott
Dr. Richard Mackenzie-Gray Scott is Postdoctoral Fellow at the Bonavero Institute of Human Rights and St Antony’s College, University of Oxford, and Visiting Professor at the Center for Technology and Society, Getulio Vargas Foundation. He is the author of State Responsibility for Non-State Actors: ...
