Dr. Courtney C. Radsch is a Fellow at the UCLA Institute for Technology, Law & Policy and a member of the Tech Policy Press board.
Amid Russia’s invasion of Ukraine, the faceoff between Russia and U.S. tech firms is escalating in step with tensions on the ground. The battle to control the narrative and public opinion pits internet giants with a professed commitment to free speech against Vladimir Putin, who has mastered the art of using their platforms for propaganda and information warfare.
Consider these recent developments:
- Europe announced it will ban RT and Sputnik, two media outlets controlled by the Russian government, and both Europe and the U.S. imposed sanctions on several people affiliated with state media, including Margarita Simonyan, editor in chief of RT and head of Rossiya Segodnya, the news agency that operates Sputnik and RIA Novosti. The details on what exactly the EU ban means were scant, but it is clearly aimed at handicapping Russia’s propaganda efforts.
- Google (including YouTube), Meta (including Facebook and Instagram) and Twitter have announced measures to deny Russian state media the ability to advertise on their platforms or monetize content, and to reduce amplification of Russian state media content through recommendation and ranking algorithms. At the request of Ukraine’s government, Google restricted access to several Russian state media outlets inside Ukraine. Nick Clegg, Meta’s Vice President of Global Affairs, announced on Twitter that Meta would do the same.
- Meta has resisted efforts by Russia’s internet and media regulator to get content removed, even as Russia throttles access to the social media network and warns that the company faces fines if it does not comply with a new law requiring foreign tech firms to maintain a local presence. The company said it had stepped up fact-checking amid the propaganda war, despite an order by the Russian Ministry of Information to stop independently fact-checking and labeling content from Russian state media outlets.
- Both Meta and Twitter announced the removal of “covert influence operations” operating out of Russia and the Russian-controlled Donbas region of Ukraine and targeting Ukrainian citizens.
The battles over purported “censorship” on social media are only going to increase amid the escalating hostilities, but once again most of these multibillion-dollar companies are winging it as they go, raising a number of unresolved issues.
How do the platforms handle state-sponsored terrorism and war propaganda?
Will social media companies be tempted – or compelled – to classify Russian propaganda as a form of violent extremism? If so, will that translate into use of a shared database to coordinate the removal of disinformation across dozens of the world’s most popular platforms?
The Global Internet Forum to Counter Terrorism (GIFCT) is an industry-led body founded by Facebook, Twitter and Google whose shared hash database is used by more than a dozen companies to coordinate content removal across the internet at scale. Hashes are digital fingerprints corresponding to content that a platform has removed and can submit to enable other companies to voluntarily remove or filter the corresponding content on their own platforms. The database is only to be used to flag terrorist or violent extremist content, a definition that has evolved since it was started in 2017, and state-sponsored terrorism doesn’t count.
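The mechanics of hash-sharing can be illustrated with a toy sketch. This is a simplified assumption-laden illustration, not GIFCT’s actual system: the real database relies on perceptual hashes (which also catch near-duplicates), while the cryptographic SHA-256 digest used here matches exact bytes only. The function names (`fingerprint`, `is_flagged`) and sample data are hypothetical.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest identifying this exact content (SHA-256 stand-in)."""
    return hashlib.sha256(content).hexdigest()

# Platform A removes a piece of content and contributes its hash
# to a shared database...
shared_hash_db = {fingerprint(b"<removed extremist video bytes>")}

# ...so Platform B can flag matching uploads without ever receiving
# the underlying content itself -- only fingerprints are exchanged.
def is_flagged(upload: bytes) -> bool:
    return fingerprint(upload) in shared_hash_db

print(is_flagged(b"<removed extremist video bytes>"))  # True
print(is_flagged(b"some unrelated upload"))            # False
```

The design choice worth noting is that sharing hashes rather than the content itself lets companies coordinate removal at scale while each platform still decides voluntarily whether and how to act on a match.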
The GIFCT said the taxonomy for what is considered terrorism remains the same – an individual or entity must appear on the UN sanctions list. But there is also a Content Incident Protocol (CIP) that allows rapid response to livestreaming or video of murder or mass violence. It does not appear that the invasion has triggered a CIP. Given the pressure on tech firms to stave off Russian aggression and disinformation, I wonder whether such content could end up being included, since we’ve seen the database expand in response to major geopolitical events – I’m sure many Ukrainians would consider laudatory videos of Russia’s violent invasion to be state-sponsored terrorism.
How will the platforms handle recognition of state accounts if Ukraine’s government falls?
Just last summer the social media platforms found themselves in the midst of geopolitical warfare when the Taliban re-took Afghanistan and expelled the democratically elected government. Not only had the US-designated terrorist group used US social media platforms as part of its efforts to regain power, but it became clear that the companies lacked policies on how to treat official government accounts in such scenarios. My reporting found that few companies had handover policies at all; those that did had them only for the US, and such policies contemplated only the peaceful transition of power. If Ukraine falls or its government goes into exile, what will happen to state accounts?
Will TikTok move quickly to catch up on its policies and practices?
Some have referred to the invasion of Ukraine as the “TikTok war,” as the site has become a method to mainline short video clips that include a healthy dose of Russian disinformation and fake viral footage alongside legitimate content shared by users on the ground. (One likely fake video purports to show Ukrainians how to operate captured or abandoned Russian tanks).
TikTok does not appear to have a posted policy on state media and did not provide any clarification despite my requests to a top official and the press team for further information. Will this conflict force the company to invest more in content moderation, trust and safety policies, and efforts to address disinformation? Even though platforms such as YouTube, Facebook and Twitter still have a major problem with disinformation, false accounts and other ills of the social media age, TikTok appears to be far behind its competitors.
Will the platforms be required to label more outlets as affiliated with the Russian state?
In 2020 I wrote about the politically fraught efforts to label state-affiliated media content on social media platforms when I was advocacy director at the Committee to Protect Journalists. But while determining which state media outlets are editorially independent is challenging, this is not the case for RT and Sputnik. After the 2016 US election, the extent of Russian information operations was uncovered and RT and Sputnik were compelled to register as foreign agents in the U.S.
In 2018, Google was the first major social media company to roll out labels for state media outlets on YouTube, and it expanded this practice in the following years to include labels for content from state-funded and publicly-funded news outlets. In June 2020, Facebook started labeling media content and ads from “state-controlled” outlets that it determined to be “wholly or partially under the editorial control of their government.” A month later, Twitter began labeling state-affiliated media and government accounts focused on the five countries that hold permanent seats on the UN Security Council, applying the labels to institutional accounts as well as to the personal accounts of editors-in-chief, senior staff of state-affiliated media, and other key foreign policy figures.
There are a range of Russian media entities and personalities that are not currently officially designated by the Department of Justice as foreign agents; if the U.S. decides to designate these entities as under the control of the Russian state, will the platforms follow suit?
– – –
The current crisis is, of course, unfolding after months of Russian threats and years of Russian propaganda that has spawned an entire industry devoted to debunking misinformation and countering disinformation and created a template that scores of countries around the world are following. Now Russia (perhaps taking a page from the Taliban?!) has complained that Facebook and its ilk are censoring it and violating the rights of its citizens.
The irony is yet another aspect of its disinformation prowess. No matter the answer to the above questions, expect Vladimir Putin and his vast disinformation apparatus to label any intervention by Western governments and social media platforms as “censorship” and part of the campaign against him. Of that, there is no question.
Dr. Courtney C. Radsch is a journalist, author and advocate working at the nexus of technology, media and policy. She is a fellow at the UCLA Institute for Technology, Law & Policy and the Center for Media Data and Society (CEU). She has led media assessment and advocacy missions to more than a dozen countries and worked as a journalist in the U.S. and Middle East. She holds a PhD in international relations and is the author of Cyberactivism and Citizen Journalism in Egypt: Digital Dissidence and Political Change. Her commentary and analysis have been published in leading outlets around the world, and she serves on the board of Tech Policy Press.