How ‘Net Hypnosis’ Distorts Our View of Democracy’s Digital Defense
Chris Zappone / Sep 13, 2024

Chris Zappone wrote and narrated Dark Shining Moment (a podcast by Ranieri & Co). He works in Australian media.
American leaders’ fixation on maps during the Cold War helped blind them to complex realities on the ground that would later haunt the US. A phenomenon called cartohypnosis – a kind of obsession with making decisions based on the logic of the map – shaped the containment strategy toward the Soviets, framed the Domino Theory, and invited an ignorance of geographical realities that would lead to the tragedy of the Vietnam War.
The logic of a global rivalry conceived as colored territory on maps blotted out the culture, history and nuance that could explain the local affinities and motivations that would be all-important in understanding insurgencies and civil wars of the second half of the 20th Century. Today, the ad hoc apparatus formed to fight disinformation runs the risk of creating a structural blindness as damaging as cartohypnosis was.
The dangers of ‘net hypnosis’
Iranian propaganda around Israel’s war against Hamas. Russian propaganda targeting Western politics around the invasion of Ukraine. Chinese propaganda converging on topics in the Indo-Pacific. Even when researchers find the suspect content, the troubled logic of a "net hypnosis" assumes the possibility of a state in which networks are free of this “bad” information. Chasing after this outcome – even without acknowledging it – could prevent a more systematic defense or counter-strategy against disinformation from taking root. Too often, removing “bad” information just makes room for more such information.
Rather than reckoning with the need for a more comprehensive approach and then working to build a more liberal world, researchers have set themselves the Sisyphean task of rooting out “bad” information on the web.
The US Justice Department’s indictment last week of Russia-based RT employees Kostiantyn Kalashnikov and Elena Afanasyeva in a $10 million scheme to influence US politics should give us pause to reconsider what has become a de facto standard for fighting disinformation.
In addition to employees of RT charged with remotely editing and guiding content production by rightwing influencers in the US, the FBI revealed a systematic Russian operation to shift European attitudes about politics, the US and the Kremlin’s war on Ukraine. This led to the US seizing more than 30 internet domains aimed at influencing the American election.
A moment to pause and reconsider tactics in the fight against disinformation
The extensive nature of the Russian effort should give defenders of liberalism a moment to consider whether a preoccupation with policing networks is actually an effective approach.
The fixation, if not outright fascination, with disinformation can be traced to the aftermath of the US 2016 election. In the febrile months after November 2016, disinformation researchers ingested the data made available from the platforms to understand when, how and with whom the Russian Internet Research Agency (IRA) interacted.
From there, a certain view of disinformation and subversive communication emerged: network-focused, forensically traceable, and somewhat binary. For media outlets, and governments, the IRA was tangible, locatable, reportable. The IRA even had a street address in St. Petersburg.
The journalistic certainty of the IRA’s campaign may have prevented a truer appreciation of the fleeting nature of weaponized information.
A more complete view of 2016, as I’ve tried to depict in the podcast series Dark Shining Moment, underscores how Russia’s interference crossed into subversive activity such as rallies and bogus demonstrations – events whose effects are notoriously hard, if not impossible, to measure and judge. The variety of effects happening contemporaneously is also visible in the details revealed by the DOJ indictments and statements from the Department of Treasury.
While the 2016 campaign involved the mass mobilization of trolls, the recent DOJ indictment revealed Kremlin-directed campaigns that relied heavily on truthful information. Documents translated from Russian and released in the 277-page FBI indictment describe the planned use of "sleeping" communities aimed at six swing states, designed to attract an audience through targeted ads, planted stories, and organic growth. "At the right moment, 'upon gaining momentum,’ these communities become an important instrument of influencing the public opinion in critically important states and portals used by the Russian side to distribute bogus stories disguised as newsworthy events," according to the authors.
From a network perspective, one would need to wait until false information is published for it to be debunked and ultimately deplatformed. That is a reactive response to a flow of information that is clearly in motion and accelerating in speed, as was the case in 2016.
Researchers’ fixation on disinformation
The nature of the “disinformation” seen at scale in 2016 and allegedly planned in 2023–24 eludes many network-dependent definitions of “disinformation.” Perhaps the discipline of disinformation research, shaped by the chaotic aftermath of Russia’s intervention in the 2016 election, leaned too heavily on cybersecurity concepts: searching out bad information and bad actors on networks.
A bibliometric review of disinformation research from 2002 to 2021 showed the number of papers "related to disinformation has increased exponentially year by year" from about 200 a year in 2015 to 1800 a year by 2021.
A keyword analysis of the same set of 5,666 papers shows the top term is "misinformation," followed by "social media." The surge in the research area appears to be bound up with the emergence of social media platforms and their effects.
In that time, the “network” view of mis- or disinformation came to prevail, even as the scope and possibility of malign foreign influence – as both the 2016 and 2024 incidents show – frequently takes place at a level not detectable purely on networks: US influencers paid to produce content, Western politicians who reliably repeat Kremlin propaganda, legitimate news that is strategically showcased.
Yet the unending fixation on content on networks, while serving the needs of research and content analysis, fails to assess how the democratic mind responds to the galaxy of variables around it.
In the process, disinformation research runs parallel to – or even departs from – the broader goal of ensuring strategic disinformation can’t derail democracy.
Net hypnosis distracts from real solutions
Networked technology allows a level of forensics that itself offers a false sense of understanding. The graphs, charts, and visualizations of disinformation make the vague and fleeting nature of ever-mutable public knowledge seem more tangible, more legible.
The events of 2016 – the mass mobilization online, the willful confusion of political topics with personality on gameable networks, the hacks and leaks, the faked or disrupted rallies on the ground – catalyzed parts of academia into thinking this way.
But the US government’s 2024 seizures of 32 internet domains used by Russia as part of an influence effort suggest that democracies should be ready for more shocks that defy easy classification. Just one segment of the Russian influence campaign divulged by the DOJ showed multiple, staged efforts targeting audiences as varied as red-staters, people in swing states, American Jews, Hispanic Americans, and gamers found on 4chan.
Even when such accounts are traceable on networks, researchers mapping them run the risk of tracking the patterns of a hurricane while ignoring the effect of the storm bearing down on democracy. If some experts in the research community pivoted away from the measurable network aspect of disinformation, they could instead examine the informational conditions conducive to liberalism – including strong local journalism and networks of trusted civic institutions empowered to communicate on 21st century platforms.
Georgetown University professor Joshua Cherniss has described liberalism as less a set of policies than a disposition: an openness to complexity, a willingness to acknowledge uncertainty, and a tolerance for difference.
Yet a notable feature of the 21st Century may be that an overabundance of network complexity is itself a risk. Too often, the research products of net hypnosis unnecessarily complicate our understanding of the information space.
To that end, perhaps the goal of researchers shouldn’t be to “solve” ever-abundant disinformation, but to ensure the uninterrupted flow of honest, non-coercive, non-bamboozling, sense-making information, the kind that allows liberalism to flourish.
Eight years after 2016, an oversimple assumption about how to defend democracy risks becoming entrenched. For all the talk of disinformation, conspiracy theories, QAnon, and now AI-driven disinformation campaigns, we mustn’t lose sight of the ultimate aim that motivates much disinformation research: for liberal democracy to succeed in the networked era.