Théophile Lenoir is a PhD student at the University of Leeds.
On May 25th, 2022, a group of notable media and communications scholars met at Sciences Po in Paris for a pre-conference ahead of the annual International Communication Association (ICA) event. The pre-conference took its title from a question that has preoccupied researchers for some time now: “What comes after disinformation studies?” (The day’s program can be found here.)
The pre-conference was prompted by these scholars’ concern about the limits of what they regard as “the disinformation narrative,” which portrays information ecosystems, democracy and disinformation mostly through a liberal world-view. The pre-conference aimed to reframe the field of disinformation studies by identifying “the importance of historical, contextual and geopolitical approaches” for understanding the relationships between truth, power and politics.
This article is an effort to share some of the arguments and conceptual frameworks discussed during the pre-conference. Translating academic language is always challenging, and some of the points below are considerable simplifications of more nuanced ideas. I hope the scholars present on May 25th will still recognize their views. My aim here is to help policymakers and others outside academia explore new avenues of reflection for thinking about how best to consider the problem of disinformation.
Countering the Consensus on Disinformation
Before thinking about what comes after disinformation studies, let’s first quickly rewind and consider the growth of the field over the last few years. The Brexit referendum and the election of Donald Trump, both in 2016, made disinformation a hot topic of research and policy concern. With a new wave of public and foundation funding, scientists opened research centers, think tanks designed new programs and fellowships, governments built independent agencies, and legislatures developed new policies and regulations to mitigate the threat. This myriad of actors (myself included) was mobilized by a deep concern about the impact of disinformation on democracy.
Their concerns were warranted, considering disinformation has contributed to riots, challenges to election outcomes, public health disasters, and even assassinations. But to many, perhaps the most daunting consequence of all was the election of Donald Trump, which presented a fundamental threat to the liberal world order. The role of disinformation served as one particularly compelling justification for the Democrats’ failure to win the 2016 election. Trump’s victory, which followed a campaign in which there was evidence of foreign interference and a substantial disinformation campaign targeting his opponent, was portrayed as the result of a flawed information system that gives poor information to citizens and prevents them from making rational decisions. For stunned elites, this could be the only explanation; otherwise, why would American citizens have elected an authoritarian charlatan like Donald Trump?
Under this “rational model,” disinformation is a fundamental threat to democracy itself, since it challenges the way democracy works. The assumption is that a sound democracy involves citizens making rational decisions based on quality information. So what happens when disinformation circulates online? How can citizens make up their minds on political issues if the information on which they base their arguments is false?
Often, those who assert the rational model harken back to mid-twentieth century norms and assumptions about how the information ecosystem should work. Truth used to surround us: in the press, in government communications, and on television. But in the social media age, with gatekeepers displaced, we’ve lost it. And that is scary, because how else can we ensure rational politics? How could we ever live in a world in which people disagree about the fundamentals of what happens around them?
Considering the limits of this approach was the objective of the pre-conference. Throughout the day, I heard a range of ideas that can be more or less summarized into three points. The first is a consideration of which constituencies are invested in the disinformation narrative in the U.S. The second concerns how disinformation is framed outside the West. And the third draws from the two previous points to suggest a way to think differently about what democracy is, and what role information plays in it. I then suggest some considerations for regulators.
Who’s Concerned about Disinformation in the U.S.?
The first reason to question the underlying tenets of the fight against disinformation is that not everyone sees disinformation as equally problematic. In the United States, the organizations that undertook this fight are predominantly liberal: philanthropies such as the Ford Foundation and the Knight Foundation, research centers at Stanford and Harvard, and think tanks such as the Aspen Institute and the Atlantic Council all share a generally liberal view of the world. In Harper’s, journalist Joe Bernstein referred to a similar constellation of actors as “Big Disinfo.” This says a lot about what is at stake in the way these organizations conceptualize the problem of disinformation.
After eight years of a liberal presidency comes a candidate who makes a habit of lying on the one hand, and who criticizes existing knowledge-producing institutions on the other hand. Taken aback and weakened, liberals were quick to make the link between the two: false information is a weapon against existing institutions. In a sense, former President Trump brought to light a raw political fact: truth is power. Politics is about describing the world to justify actions. If you manage to describe the world in a way that justifies your actions, you have more power.
The asymmetry in who cares about disinformation has led to specific stereotypes and metaphors that are largely shared and agreed upon in the field. These are, for example, the ‘information warfare’ or the ‘infodemic’ narratives. The issue with these narratives, according to some of the thinkers gathered in Paris, is that they often frame large parts of the population as either enemies or irrational beings that can be easily manipulated. This is somewhat problematic, since the underlying argument for why disinformation is an issue in the first place is that it challenges the shared fabric of society. Paradoxically, by discrediting the basis of their opponent’s arguments as being simply false, the defenders of the disinformation narrative also fail to work towards a shared understanding of the world.
The first lesson for the field of disinformation studies is therefore to take what could be called a symmetric approach to disinformation: working towards a better understanding of what disinformation represents across the political spectrum, and formulating the problem in a manner that assumes all citizens are both rational and irrational, and equally driven by identitarian interests.
Thinking Outside the West: Disinformation in the Global South
A second observation scholars in Paris discussed is that the current consensus frames disinformation as a Western phenomenon. This does not mean that disinformation researchers haven’t looked at disinformation operations taking place in other parts of the world: there is extensive research on disinformation in Brazil, in Nigeria, in Myanmar and elsewhere. Rather, it hints at the fact that, when looking at disinformation in other parts of the world, it is easy to frame it in terms that make sense for Western policy-makers and researchers.
In many places where liberal democracy is not the prevailing political system, the notion of disinformation itself is opaque. When the press works hand in hand with private companies or governments to share carefully narrated news, doesn’t the very notion of disinformation lose substance? Information infrastructures have always shaped the news, both in terms of form and content. In today’s world, Facebook shapes media coverage and political communication, and indirectly exports the concerns over disinformation to other parts of the world, shaping the way we speak of truths and lies.
Western social media platforms portray information ecosystems everywhere as free information markets in which citizens navigate content as rational agents to make informed judgments. This prevents research from considering how the notions of truths and falsehoods can be perceived in other political systems, how they are weaponized and how they impact people. “If all you have is a hammer, everything looks like a nail”: if your information system is run by companies like Facebook, every falsehood looks like a negative externality of a flawed information infrastructure.
This raises deeper questions about how platforms participate in shaping the relationship between the West and other parts of the world. The example of the war in Ukraine is enlightening: some countries outside the West have had trouble adhering to the U.S. condemnation of Russia’s information tactics as disinformation. Protesting against the disinformation narrative is also a form of protest against the U.S. in countries that have had to live with the West’s expansion strategies. How did they perceive Western media narratives around military operations such as the invasion of Iraq?
The second lesson is therefore similar to the first one, but at a different scale: moving the field of disinformation studies forward requires looking at disinformation from outside the West, which often means considering how notions of true and false are perceived differently in political systems that are not liberal democracies.
Democracy: Conflicts that Never End?
Both previous points bring to the fore perhaps the most important critique of disinformation studies. When the underlying assumption is that disinformation is a threat to democracy, what is “democracy”?
Depicting democracy as a process of individual, rational evaluations of self-interest leads to seeing politics as a series of causes and consequences: citizens and politicians make decisions using an information infrastructure which most often works, but sometimes fails. In this line of thought, Facebook introduces deficiencies into the larger machinery of the public sphere that has made it possible for personalities such as Trump to get to power.
This vision merges technology and politics so much that, when offering solutions, it is difficult, if not impossible, to differentiate technology from politics. Policymakers have tried to save democracy from disinformation in the hope that it would protect Western societies from the threat of populism. Interestingly, the solutions that emerge from this way of thinking are often technical and have to do with monitoring recommendation algorithms, sharing data or building rapid alert systems. Will this make racism, populism, discontent, polarization and anger all go away? It still seems this is the hopeful expectation of some of the disinformation narrative’s proponents.
Conflicts are an integral part of democracy. Yet the Western liberal fight against disinformation can easily be interpreted as an enterprise to make conflicts disappear. What is expected to happen once the big information machinery is fixed and disinformation is kept to a minimum? The results might prove to be disappointing if they are evaluated on the ability to ease tensions and prevent antagonism.
The third lesson, then, is to think about how to live in a world with disinformation. Historical approaches to disinformation show that falsehoods have always been a part of democracy. So they will most likely not go away. At what threshold can we start focusing again on politics and less on the ‘information ecosystem’? This lesson is more theoretical. It is to try to care more about politics and less about technology. It involves depicting the public sphere as something other than a vast media machinery with parts that can be tweaked to arrive at optimal outcomes.
Thoughts for Regulators
The arguments above could be interpreted as justifying a lax attitude towards truth in public debates, or as suggesting that governments, universities and foundations should stand down in their efforts to combat disinformation. How productive are these critiques when American citizens are being targeted with false information to prevent them from voting, particularly Black people, who are historically disenfranchised? When fake massacres are being staged by Russian forces to tarnish the reputation of the French army in Mali? How useful are these ideas to the activists, NGOs, policy-makers and state organizations who work to protect the rights and security of their fellow citizens?
A lot of credit must be given to those who have worked on tackling disinformation. Platforms have been forced to make significant changes that have real impact at scale. Civil society is more prepared to confront disinformation, particularly at key moments such as during elections. And in Europe, the Digital Services Act was agreed upon, with provisions to open tech platforms to public scrutiny. The legislation manages to deal with illegal as well as problematic content, encouraging platforms and politicians to think of disinformation’s gray areas. This is a very significant step in holding large platforms accountable and offering technical solutions to fix the information ecosystem.
But the scholars gathered in Paris argue that such solutions will have limited effects. Since truth is power, any truth serves someone’s interests. It has become increasingly important to identify whose interests a statement represents rather than simply whether that statement is true or false. As was pointed out in the pre-conference opening remarks, “we can’t fact check our way out of global regimes of white supremacy and racial hierarchies in their myriad of forms.” The limits of the disinformation narrative are perceived well beyond academia: something doesn’t quite work with the naïve defense of truth.
Liberal democracies face strong opposition from people attentive to these limits. This is perhaps the argument that the scholars gathered in Paris, in conceiving of a field of post-disinformation studies, will help spread: fact-checking our way out of politics will not work. Technical solutions to political problems are bound to fail. The challenge for liberal institutions and leaders is to engage in politics by framing narratives that make people want to live in the world they seek to build.
Théophile Lenoir is a PhD student at the University of Leeds. Previously, he was Head of the Digital Program at Institut Montaigne, where from 2017 to 2021 he developed the think tank’s research on digital issues. He is the co-author of Institut Montaigne’s note Information Manipulations Around Covid-19: France Under Attack (July 2020), and coordinated the production of various reports, including Media Polarization “à la française”? Comparing the French and American ecosystems (June 2019). He also worked for Reset Tech and was one of the lead authors of The French Information Ecosystem Put to the Test (June 2022), a report by the French Online Election Integrity Watch Group on disinformation during the French election. He is a graduate of the London School of Economics and the USC Annenberg School for Communication and Journalism.