War Propaganda and International Law: A Conversation with Vivek Krishnamurthy

Justin Hendrix / Mar 18, 2022

Audio of this conversation is available via your favorite podcast service.

Governments and tech platforms have moved quickly to take action against Russian state media since the invasion of Ukraine on February 24. But what frameworks exist in international law that could inform our thinking about these complicated questions at the intersection of speech and human rights?

To answer that question, I spoke to Vivek Krishnamurthy, the Samuelson-Glushko Professor of Law at the University of Ottawa and Director of the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC). Vivek is currently a Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School, a Faculty Associate of the Berkman Klein Center for Internet & Society at Harvard University, and a Senior Associate of the Human Rights Initiative at the Center for Strategic and International Studies in Washington, D.C.

What follows is a lightly edited transcript of our discussion.

Justin Hendrix:

So with the invasion of Ukraine by Russia, you have a piece that came out on the Center for International Governance Innovation's website with the headline, Putin's Illegal War Has Gotten an Easy Ride from Big Tech: Are Russia's RT and Sputnik spreading war propaganda? Restrictions are already in place and arguably apply here. So what part of international law were you referencing in this piece?

Vivek Krishnamurthy:

Sure. So the International Covenant on Civil and Political Rights, which goes by the abbreviation ICCPR, is perhaps the most important international human rights treaty or certainly one of the three most important. This was negotiated back in the 1960s, and it's a treaty that has very wide acceptance around the world.

Article 20 of this treaty basically states that any propaganda for war shall be prohibited by law. So it struck me as we were having this debate about RT, Sputnik, other Russian state media, and also Russian government activity on social media of various kinds, is there a way that we can use this existing international legal prohibition to structure the response of companies to this content and also ultimately of government?

So as often happens in content moderation and other tech policy debates, something happens in the world, and people are looking for ways to respond to it. Often the decision making is a little ad hoc. People think about it from an ethical or a moral perspective, which is certainly one way of thinking about things and I wouldn't discourage that, but oftentimes we can reach into the law and find legal instruments that can guide that decision making. So really the aim of the piece was to make clear that this provision exists, that it's potentially relevant, and to try to help apply it to this situation.

Justin Hendrix:

So tell me a little bit more about this particular law. What was the ICCPR trying to do?

Vivek Krishnamurthy:

So to really explain this, we need to dive into a little bit of history. I haven't done the primary historical research myself. There's a great scholar at the University of Sussex named Michael Kearney, who wrote a book about the prohibition of propaganda in international law about 15 years ago. And in the book he laments how this provision has basically been ignored. It just hasn't gotten much attention from scholars or anyone.

But his book, which is a fascinating read that I highly recommend to anyone with an interest in these topics right now, chronicles the origin of this provision. At the end of the Second World War, there was a recognition that propaganda had played a very significant part in unleashing that war. And during the Nuremberg and Tokyo Trials, several prosecutions of propagandists were brought, and those prosecutions really focused on a couple of things.

On the fact that these propagandists had created the conditions for an aggressive war to occur by whipping up their populace to be ready to support aggressive acts by the Axis forces in World War II. But, part of the charges were also about the suppression of legitimate and free information in Nazi Germany and in Imperial Japan that would have prevented the development of alternative points of view regarding those countries' aggression.

So, in the 1950s and 1960s, there was this clear view that propaganda was dangerous, that propaganda had played an important role in starting World War II. And if you look back at the debates, it's fascinating that the delegates at these international conventions were animated by trying to prevent propaganda from igniting World War III. Here we are today, President Biden last week raised that possibility, right, that, if we don't manage this right, certain escalation paths could lead to a global conflict. So that's where the origins of this provision lie.

There was extremely heated debate between basically three groups of countries during the negotiation of this provision, the former Soviet bloc countries, let's call it the Western liberal democracies and then the newly decolonized countries of the global south. This is at the end of the Second World War. So, India and lots of countries in Sub-Saharan Africa and Latin America became independent.

So the Soviet Union was the prime advocate of some kind of prohibition on propaganda. And there was a lot of debate that ultimately resulted in this formulation being the treaty. And if you read it carefully, it's not an absolute prohibition. It's not like the prohibition on slavery in international law where it says slavery is prohibited. It says, "Any propaganda for war shall be prohibited by law." So it suggests that governments need to do something.

They need to enact laws that prohibit war propaganda, whatever that means. And we can talk about the interpretive difficulties in how we define this. I will say that mostly governments in the Western world have not legislated provisions to implement this. And in fact, the United States has been very, very suspicious of this provision and has made what's called a reservation in to the ICCPR basically saying that, "We interpret this consistent with the First Amendment, which requires us to do very little beyond the American legal test for incitement", right, which is a pretty demanding test.

Justin Hendrix:

I happen to have open in front of me a paragraph from a dissent of a Soviet judge in the acquittals of two indicted organizations in the Nuremberg Trials, the Reich Cabinet and the General Staff/High Command, and three Nazi defendants. This is from Judge Iona Nikitchenko who wrote:

The dissemination of provocative lies and the systematic deception of public opinion were as necessary to the Hitlerites for the realization of their plans as were the production of armaments and the drafting of military plans. Without propaganda, founded on the total eclipse of the freedom of press and of speech, it would not have been possible for German Fascism to realize its aggressive intentions, to lay the groundwork and then to put to practice the war crimes and the crimes against humanity.

Do you think we're seeing something similar go on with Russia at the moment, this total eclipse of the freedom of the press and of speech?

Vivek Krishnamurthy:

I mean, in a word, yes. And I think this is the great historical irony of the moment, right? The Soviet Union was the greatest victim of the Second World War. They lost more people than any other country. I think 40 million people– Soviet people– died in the Second World War. So the Soviet Union was the prime advocate in the postwar period of a pretty comprehensive ban on war propaganda.

Now, of course, by that time, though, some of the interests of the Soviet state had shifted from merely preventing World War III to also preventing Western ideas and influence from entering the Soviet Union. But what's really interesting is that if you look at former Soviet countries, these are the countries that have enacted legal bans on war propaganda. And amazingly, the Russian criminal code makes it a criminal offense to plan, prepare for, embark on or conduct a war of aggression, or to incite a war of aggression.

Russia has this criminal code provision that, of course, they're not enforcing. And I would argue, to just go back to your question, that yes, we are seeing a dramatic closing of civic space in Russia, right? I mean, I think other people have noted it. I think Fiona Hill made this point– she's much more expert on Russia than I am– that for much of his reign, Putin has actually tolerated a relatively vibrant– for an authoritarian state– public sphere in Russia, where you could access many sources of international information.

Certainly, you couldn't protest, but certain discussions were had, and we're seeing that close very rapidly, with the beginning of the war on Ukraine, with of course the blocking of platforms, with the blocking of independent media outlets. And this goes back to, again, those early postwar understandings of propaganda, again, as not just the advocacy for aggressive war, but that closing, I think is really important to keep in mind. That we're talking about state action that is trying to limit public discussion and the spread of information so that the only information available is what supports the warlike ambitions of the current regime.

Justin Hendrix:

So even since you published this piece, which was March the 12th, there have been more significant moves by the social media platforms to limit the distribution of Russian state media, including RT, Sputnik across Facebook, Instagram, Twitter, TikTok, others, and YouTube. Do you think there's still work to be done? Do you think the platforms still have work to do in this regard?

Vivek Krishnamurthy:

So this is a very rapidly evolving space. I think for some time platforms have been pursuing what some people in the lingo call the non-binary policy levers to deal with state media. It's actually been almost two years now since what was then Facebook, now Meta, decided to down rank Russian state media content. And with the initiation of hostilities, we saw a number of platforms move quite aggressively to further down rank, to label content, to demonetize it, et cetera.

I think the big shift that we have seen in the last few days is a decision by YouTube to go further, right, to go just beyond the non-binary policy options to actually deciding that they would block or de-platform RT and Sputnik content globally. That, to me, is a watershed moment and of course, there's lots of different equities here that the platforms are trying to balance.

So on the one hand, let's accept that it's hard to define what war is and what propaganda is, but let's just assume for a moment that RT and Sputnik are spreading this stuff. Well, there's still an interest in knowing what the Russians are thinking about. What is being said? Freedom of expression at the international level, as in the United States, clearly includes a right to receive ideas and information. So any platform action that completely bans the availability of that content certainly has an impact on viewers in terms of being informed and knowing what's going on, what the other side is thinking.

At the same time, platforms are trying to stay open in Russia. And they're trying to calibrate their responses to various forms of Russian governmental abuse of their platforms in such a way that they don't get completely blocked.

And I think this has been a fine line that they're trying to maintain. One that I think is ultimately untenable, because it seems pretty clear to me that the direction of travel that Russia is headed towards is towards basically having total control of their domestic information environment. I think they are very clearly on a pathway of trying to block all forms of dissent, all forms of independent information. So I'm not sure that the platforms, their attempts to calibrate these responses are going to be successful in view of where Russia seems to be going.

Justin Hendrix:

Are there other precedents or other international legal considerations that have been made in this space? I'm thinking in particular of the OSCE report on propaganda and freedom of the media.

Vivek Krishnamurthy:

So the OSCE is the Organization for Security and Co-operation in Europe. This is a really interesting international body that formed during the Cold War, during the period of detente in the 1970s. And there was something called the Helsinki Final Act, which is this agreement between the Soviet Union and a lot of Western countries on unwinding the Cold War to some extent, deescalating it and increasing understanding.

And one of the big pieces of the OSCE's mandate coming out of the Helsinki Final Act is around information, free expression, communication, et cetera. So the OSCE has a special representative for media freedom, which is analogous to the role played by the UN Special Rapporteur on Free Expression, formerly David Kaye, now Irene Khan. So a body that has expertise and convening power on these issues.

So in 2016, the OSCE put out this report, and I just have to read part of the introduction: it's a report to help states formulate policy responses to the spread of propaganda, which was then intertwined with the conflict in Ukraine. So as we were discussing before we started, yes, we are in a hot war that started two weeks ago, almost three weeks ago now, but it's an escalation of a conflict between Russia and Ukraine that's been going on for eight years, since Russia invaded Crimea and detached the Eastern districts of Ukraine into the two self-proclaimed People's Republics. And Russian propaganda around the 2014 armed conflict and ever since has been seen as problematic.

So the OSCE put together this report, again, a few years ago, trying to think about ‘what do we do about this,’ right? And it's, what's known in the lingo as a non-paper, which means it's a discussion or trial balloon paper, rather than the policy of the OSCE member states, and there's a few important pieces in it. I think one is a recognition of the dangers of propaganda to free media and free expression. And I think that, again, the thing about propaganda is that it's like disinformation. It pollutes the information ecosystem. It makes it hard to tell what's true and what's not.

So Americans might call it low value speech, in American free speech doctrine. But it does pollute the information environment and require a response. And it can clearly create the conditions for war. But at the same time, there are countervailing risks in how we respond to it. Again, the word propaganda is an inconvenient word because it has this pejorative sense that we have attributed to it. When someone is speaking in a way that's trying to convince other people without resort to factual argumentation, we call that propaganda colloquially. So the use of that term in the law is difficult.

But of course, as with other information disorders, I think there's this concern that the cure is sometimes worse than the disease, that we can't narrowly target propagandistic activities without burdening other legitimate expression or creating precedents that are dangerous in the hands of authoritarian governments. It can slip down into this morally relativistic situation where, well, how do we define propaganda? Where you're saying, 'I think it's propaganda, so I'm going to ban the BBC,' because that might be how Vladimir Putin views it.

Justin Hendrix:

So one of the more significant things that has taken place in the tech policy space in the couple of weeks since this invasion began happened last week, when some leaked content moderation guidance from Meta was made public in a report from Reuters, and the report itself was later updated.

I think it’s safe to say that the initial report spread very quickly and created an opening for the Russian government to position itself as taking action against Meta almost defensively. They claimed that Meta was allowing extremist behavior and calls for violence against Russians. What did you make of that incident? And does any of this legal apparatus help you think about that particular episode?

Vivek Krishnamurthy:

So I think we're talking about the report that Meta was relaxing enforcement of its policy against incitement to violence in Ukraine and neighboring Eastern European countries. So to permit people in those countries to basically say, "Kill the Russian soldiers," or words to that effect, while maintaining a ban on advocating for the violence against Russian civilians.

As with many other aspects of tech company content policies, there is a sense of trying to respond in real time to what's happening. And a lot of it feels ad hoc and convenient, rather than based on deep principle. And I think we can trace this conversation back in many ways to what happened with Cloudflare after Charlottesville, right? When Matthew Prince very famously declared that he would no longer do business with the neo-Nazis and the white nationalists.

And there's the sense that, well, this might be the right decision, but I'm concerned that it's just coming out of thin air, that there's no policy rationale that underpins it. So I guess part of what I was hoping to do in the piece that I published is to say that, "Okay, if we can take existing legal frameworks and use them to guide our decision making, we can end up in a much better place and have coherence and actually be able to push back on that Russian argument that's saying that, 'This is just convenient. You're carrying the water of the United States or whoever else in this conflict, rather than being a principal actor'."

So as it happens, the Article 20 framework is really helpful in thinking about how platforms should respond to all of these things. We know that international law has a problem with war propaganda, and that has been interpreted to mean advocacy of aggressive war. And we have a war of aggression by Russia. This is an easy case for the application of this prohibition, because there's a pretty strong international consensus that Russia has started an illegal war of aggression.

And we can point to the UN General Assembly resolution, where only four other countries, all pariah states, voted with Russia. The international consensus is clearly that Russia has declared an illegal war. So, the Article 20 prohibition of war propaganda has been interpreted as applying to wars of aggression, but it clearly excludes advocacy of self defense. So the best international legal commentary says, "If you are engaged in advocacy or propaganda or whatever in exercise of the right of a state to self defense, the ban on propaganda does not apply to you."

And this is a great way of basically taking Meta's policy decision and applying it to a framework, which is to say that there is something distinguishable between Russia advocating for the use of force in Ukraine, which violates international law versus Ukrainians saying, "Hey, let's go and use lethal force against the Russian invaders". I mean, first of all, the underlying use of force by Ukraine is lawful under international law. Ukraine has a right to defend itself. That's clear. There's a pretty strong international legal consensus on that.

So similarly, the advocacy by people in Ukraine to go and use violence against the Russian military is justifiable. And if Meta had grounded its decision in those international legal interpretations, I think it would've gotten a lot less pushback. Looking at the tech policy commentary on Twitter, the reaction was pretty negative. It felt like yet another instance of a tech company just feeling which direction the wind was blowing rather than engaging in principled decision making. But I think that's the enduring value of law generally, and of trying to attach yourself to the law as you decide what's permitted, and what's not, on your platform.

Justin Hendrix:

It's interesting, I've seen other commentators suggest similar things that some of the platforms have seemed to make decisions somewhat in deference, perhaps, to national governments that have, of course, called on them to take more action or what have you. And then it might have been better to rather do as you say, which is to defer to international human rights law or to some broader human rights standard.

Vivek Krishnamurthy:

I mean, I think it would be fine, or certainly defensible as a moral stance, to say, "We are going to choose a side." Most big tech platforms are based in the United States, so saying we're going to follow the policy of the United States here is certainly one defensible way of making a decision. That said, I think it's perhaps not the best way for tech companies that operate on a global scale, and who wish to operate globally, to basically say that they're going to be guided by the policy making and policy direction of their home state or of any given state.

Which is, I think, why there has been this groundswell of opinion. David Kaye, as a former UN Special Rapporteur on Free Expression, played a huge part in making this argument that technology companies ought to look to international human rights law for guidance on these kinds of questions. And obviously that idea predates David's mandate. The work of the Global Network Initiative, which was founded in 2008, is also devoted to the notion that international human rights law should guide how companies react to censorship requests and also to government demands for user data.

Justin Hendrix:

So just a couple of last questions. One thing that I've been curious about is the way that certain companies have handled, or have not handled, state accounts associated with, say, particular embassies or consulates. The Russian state has so many Twitter handles associated with particular national embassies or consulates or other bits of its state apparatus. And each of them is essentially operating as a state media outlet or channel, and in some cases spreading what is very obviously disinformation or propaganda. What do you think the platforms should be doing about those types of accounts?

Vivek Krishnamurthy:

I think this is an extremely difficult question where reasonable people can disagree. And here's a way that we can think about this. RT and Sputnik are both state media outlets that operate under the guise of being traditional media, when they actually are not. So Phil Howard and Mona Elswah at the Oxford Internet Institute wrote this great paper a couple of years ago about the organizational behavior of RT. And it asks the question not of what's on RT on any given day, whether this or that story is propaganda, but rather looks at the organization and its structure. It finds that even though RT holds itself out to be a media outlet like another cable news channel, like CNN or MSNBC, it's an instrument of the Russian state. Its journalists are hired not for their journalistic skills, but for their adherence to an incentive system that basically seeks to push the Russian government view on things, with pervasive Russian government control.

Whereas if we see Sergey Lavrov, the foreign minister, saying that on Twitter, we see that it's an assertion by a Russian government official. It doesn't have that imprimatur of coming from something that resembles a media organization. Now, I honestly don't know what they should do with these particular accounts. I think there was a case to be made for de-platforming them, as was done with Donald Trump. Arguably, the false information they are spreading is more dangerous than Trump's tweets on January 6th.

Justin Hendrix:

So if Twitter's executives, or if Nick Clegg called you from Meta tomorrow, what do you think you'd say to them? What would be the first bit of advice that you'd give?

Vivek Krishnamurthy:

Again, I would say to align policy and decision making, to the extent possible, with the requirements of the law. We've seen this more broadly in how platforms set their content policies: trying to find a way of aligning themselves with international human rights standards. Now, one thing that is different here is that there's a war going on. International law recognizes that war time is quite different from peace time.

And it may well be, I mean, it's crazy to contemplate this, but it may be necessary for platforms to develop policies around what happens with regard to interstate armed conflicts. So, all of the platforms have had a reckoning over their role in, let's call it civil conflict, in Myanmar and Ethiopia and many other places, where there's been internal conflicts and platforms have been used to foment genocide within a country.

What I don't think anyone in the tech world has contemplated is what we do about interstate war. We thought it was a thing of the past. Clearly it's not. And if we take Russia's assertions at face value, that they have grander ambitions to remake the security architecture of Eastern Europe, I think it's time that we do some proactive policy thinking about how we're going to deal with this in the future.

Justin Hendrix:

Vivek, thank you very much for speaking to me about this.

Vivek Krishnamurthy:

Oh, it's been my pleasure.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & ...