Facebook whistleblower Sophie Zhang testifies to UK Parliament

Justin Hendrix / Oct 19, 2021

Facebook whistleblower Sophie Zhang, a former employee of the company who told The Guardian's Julia Carrie Wong about rampant political manipulation on the platform, testified yesterday to a UK Parliament Joint Committee appointed to consider a draft Bill to establish a new regulatory framework to address digital harms. The session was led by Chair Damian Collins, a Conservative MP who has led past inquiries into disinformation, privacy, and other matters related to technology and democracy.

Below is a rough transcript of the recorded discussion prepared by Tech Policy Press.

Damian Collins:

Well, now onto the final panel for today's evidence session. And we are pleased to welcome Sophie Zhang, who is testifying remotely to the committee. Sophie, just by way of introducing you, you're a former Facebook employee, and you've been a whistleblower who spoke out against practices at the company that you saw. Just for the benefit of the record and people who are following this, your role principally at Facebook was as part of civic protection, looking to identify networks of inauthentic accounts that were coordinating and spreading information on Facebook in different countries around the world. Is that a fair description of your principal function at Facebook? Or is there something you'd like to add to that?

Sophie Zhang:

So I'd just like to add to that: although that is what I was doing, it was in my spare time and I was essentially moonlighting in it. My actual job was as a data scientist on the fake engagement team, which was focused on the prevention of inauthentic activity, but this inauthentic activity was predominantly non-civic, because most people are not politicians and most discussions are not political. I hope this makes sense.

Damian Collins:

Yes. It would be fair to say, from what I've seen of what you've written and said, that the bulk of this activity within the company is directed at preventing and removing spam content rather than necessarily protecting societies.

Sophie Zhang:

So my team was aimed at preventing spam. Although our mandate was defined rather broadly, that was the intention and focus of the team. And I was essentially moonlighting in a separate area that was technically in my purview, but that I was not expected to do.

Damian Collins:

Now, when you initially spoke out about this, when you wrote about this in an internal blog post at Facebook, which was then quite widely reported, you highlighted a number of coordinated campaigns using inauthentic accounts, particularly in Honduras, in Brazil, in Uzbekistan. These networks were being used to spread disinformation within those countries that could have had a negative effect on democratic society or influenced the outcome of elections. Is that correct?

Sophie Zhang:

That is correct, with the caveat that I think you meant Azerbaijan instead of Uzbekistan. I did not do work in Uzbekistan.

Damian Collins:

Sorry, you're quite right, I've got my 'stans mixed up there. In general terms, and this is a particularly pertinent question for us in the UK at the moment-- particularly following the awful murder of our colleague, David Amess, on Friday-- whilst you're looking at a system, in this case networks of inauthentic accounts, do you believe that the way the platform works at the moment is that it tends to radicalize opinion and promote extremist ideas, whether through organic posting or through groups of people creating inauthentic campaigns or bot accounts, and that these are being used to create divisions in societies, to undermine those societies and to allow the distribution, on an unnaturally large scale, of extreme ideas?

Sophie Zhang:

So, before I answer that question, I’m going to try and break it down to make some distinctions, because it sounds like you're concerned about hate speech, misinformation and other ideas that have increasingly radicalized people and resulted in incidents like the extremely tragic murder of an MP. And, that is distinct from inauthentic activity because radical extremist content, hate speech and misinformation, this is a function of content, as in what the person is saying. To give an example, if someone writes on social media, "cats are the same species as dogs," this is misinformation regardless of who is saying it. It's not very harmful misinformation, but it's still misinformation. It doesn't matter if the prime minister said that, if a cat fanciers club said it, anyone. In contrast, inauthentic activity, it's a function of who the person is. It doesn't matter what the person is saying. If I create tens of thousands of fake accounts on Facebook and use them to spread the message, "cats are adorable," this is a perfectly legitimate message, except for the fact that I'm using fake accounts to spread it. And ultimately Facebook would be correct to take these down, regardless of how much I yell afterwards that Facebook is censoring cute cats.

And so these two areas are commonly confused with each other. There exists a public perception and stereotype that fake accounts are used to spread misinformation, that a considerable proportion of misinformation is spread by inauthentic activity. Like most stereotypes, my personal experience is that this is incorrect. That most misinformation is spread by real people who tragically, genuinely believe it, that most hate speech and the like is spread by real people who tragically, genuinely believe it, and that inauthentic activity, fake accounts, et cetera, are used mostly to spread activity that is otherwise benign in terms of the realm of discussion.

And I do want to also differentiate this from several other types of inauthenticity that people might be confused about. A common accusation is that people spreading misinformation, fake news, or hate speech do not genuinely believe it, but are doing so for their own purposes. And this may be true, but this is entirely separate from the sort of social media inauthenticity that I'm talking about, because in this case, people are still saying things under their real names.

And now that I have broken down the question and spoken about it, I'm going to actually answer the question, and apologies for that digression. I think it's not controversial to say that Facebook, that social media in general has rewritten the rules about how information is spread and distributed.

And before, in the past, when a topic of discussion was to go public, there were gatekeepers. For instance, established media would decide whether to report on it or not. If you said "the moon is made out of cheese" or something absurd like that, it did not matter how much people believed you, the established media would not report on you seriously. And so it would be very difficult to get your claims out. And today, with social media, those gatekeepers have broken down. I don't think this should be controversial. It's the fundamental conservative idea-- the idea of Chesterton's fence-- that not all changes are good, and that when you want to make changes, you should understand beforehand what the ramifications of those changes are and why the existing system is in place.

The breakdown of the gatekeepers has had positive effects as well. I mean, certain types of speech were taboo in the past. For instance, LGBT issues were not widely discussed in the media and the public a mere 50 years ago. But at the same time, the breakdown of gatekeepers has also allowed for the increased distribution and spread of radical and damaging ideas. Because today, people are concerned about free speech with regards to what you can post on social media. But I personally see this as a smokescreen and distraction, because the real concern is not free speech, but rather freedom of distribution. In the past, if neo-Nazis were allowed to speak out, people were not worried that their ideas would spread widely, be disseminated and reach others, but today there exists that concern.

No one has the right to freedom of distribution. I mean, just because the Guardian doesn't want to publish you doesn't mean that you're being censored. And so I want to be clear that this isn't the area of my expertise, but others have talked about the way that social media algorithms create an incentive for people to write posts that are sensationalist, attention-drawing, or emotion-grabbing. And one of the easiest ways to do that, sadly, is making bold claims that fall into the realm of misinformation or hate speech or the like. And so this was absolutely not my purview and remit, but if the committee is interested in this, what I would personally suggest is considering areas to decrease virality, such as requiring social media companies to use chronological newsfeeds, or potentially limiting the number of reshares, so that if someone on Facebook reshares a post, and then you look at the shared version and share it as well, maybe after that you have to go to the original post to share it. I hope this is making sense.

Damian Collins:

Where you saw cases of networks of accounts spreading disinformation, or spreading hate speech, whatever it was, how often were inauthentic accounts a factor in that? I appreciate you were principally looking at inauthentic accounts, rather than disinformation or hate speech as categories of content, and you said earlier on that you think the biggest problem is real people posting this, but what do you think is the role of these inauthentic accounts in boosting content that other people have created, to create a bigger audience for it?

Sophie Zhang:

So I want to break this down again, because I did not work on hate speech or misinformation primarily, and to the extent that I worked on it, it was generally only because others were concerned that these messages were being spread by fake accounts. And as I said, that is a bit of a stereotype and, like most stereotypes, I don't see any evidence for it being correct. And so I'm going to give an example that was from the United Kingdom, and so may be familiar to you. Not a case I worked on, not hate speech, but misinformation. In late 2019, in the lead-up to the general election, there was a piece of misinformation that spread around widely in relation to the story of the Leeds hospital incident, in which I believe a baby was put on the floor, and the misinformation went something along the lines of, "I have a good friend who's a senior nursing assistant at Leeds hospital, and this is incorrect," et cetera.

And this was spread around by being copied and pasted by many different people, who of course did not all have good friends at the hospital. When this came up and was quickly debunked, it was very concerning. And many people thought that it was spread by the use of fake accounts. And so this was something that I was put onto initially, to look for the possibility of fake accounts being used to spread this. And this was something that I and others looked into, and we did not find any evidence of fake accounts. I do want to be clear that not finding evidence is not the same thing as being certain that it doesn't exist, because, just in the same way that a police officer would not be able to establish for certain that someone is not a criminal, you could always argue that they are hiding extremely well and have simply hid their misdeeds well enough. So I worked on many cases of hate speech or misinformation, mostly misinformation, that were alleged to have been spread via fake accounts. And in essentially all of them, I did not find any notable fake accounts.

Damian Collins:

So if I could just ask, finally from me, because I know other members want to come in as well. Just on some of the things you did work on directly. Take, for example, the network of accounts being operated in Honduras to favor the president of Honduras. You were very concerned about that. You raised that with Facebook and it took nine months for that to be addressed.

Sophie Zhang:

It took eleven and a half months-- it took nine months to start the investigation. Sorry.

Damian Collins:

So could you say, I mean, you said you took that up to vice president level within the company. Who were the most senior people you spoke to about that in your attempt to try and get this issue taken seriously?

Sophie Zhang:

I personally briefed vice president Guy Rosen on the issue. Guy Rosen is the vice president of integrity at Facebook.

Damian Collins:

And after you briefed him, it would appear that wasn't enough for him to take any action.

Sophie Zhang:

The general trend that I would describe is that everyone agreed that the situation was terrible, but people were not convinced that it was worth giving more priority-- whether Facebook should act, et cetera. There was mostly agreement that this was terrible, but no agreement on what actions should be taken and how much of a priority it should be.

Damian Collins:

They agreed it was terrible, but they didn't think it was necessarily worth Facebook's time or investment to do anything about it.

Sophie Zhang:

That is the way I would personally describe it. Or at least not doing anything about it in a timely fashion, because it was taken down, even if it returned immediately afterwards.

Damian Collins:

And is that because you think not only does it involve resources to take it down, but also this fake engagement could--this fake content, these fake accounts could be driving engagement on the platform?

Sophie Zhang:

I don't believe that was an area of concern, because it's a tiny, minuscule fraction compared to the overall amount of activity on Facebook. Like, these were thousands of fake assets, which sounds like a very large number until you realize that Facebook has something like two or three billion users. I doubt that the idea ever crossed their minds, to be perfectly frank, with regards to their failure to prioritize it. My personal guess is that it was primarily due to the time required to take it down-- but there were perhaps some political considerations, because this was, after all, the president of a nation, though a very small one.

Damian Collins:

Kind of finally from me, given what you said before, it sounds like you had concerns about the resources at Facebook, the number of people involved in checking content. You complained about the fact you were often making decisions on your own about what should and shouldn't be done. So firstly, do you think the company needs to put more resource into this? And secondly, the Wall Street Journal reported yesterday that Facebook has become too reliant on AI for content moderation and that Facebook's AI systems only catch a very small, single-figure percentage of the sort of harmful content that should be removed. And I just wondered what your thoughts were on that.

Sophie Zhang:

Yeah, absolutely. So just to break this down, with regards to the use of AI, the vast majority of Facebook moderation for content-based matters is done using artificial intelligence. By content matters, I mean hate speech and misinformation, but I also mean, for instance, spam-- people trying to sell you things online, often scams; I mean nudity and pornography; I mean websites and links that send you to malware sites, and that sort of thing. These are relatively easy, comparatively, to moderate via AI, but there exist differences in the level of enforcement between nations. What I mean is that if you want an AI to determine whether content is hate speech, you need an AI that can speak that language, or at least has data in that language to classify.

And of course the resources differ considerably between these nations, and in addition, the definition of hate speech at the company may not agree with the public's widely held definition. For instance, as of a year or two ago, according to Facebook's policies, the phrase "men are trash" was hate speech, and Holocaust denial was not hate speech, which I would hazard a guess that very few people would agree with. I did not work personally on hate speech, so I don't know the other factors at play with regards to the researchers' complaints that the large majority of it was not caught. Another concern that I would personally express regarding hate speech, which others have also expressed, is that the company's focus on driving down the total volume of hate speech is not necessarily the way to go, in that the risk of hate speech ultimately is not that many people will see it, but that some people will see it and, since they like it, very often become radicalized by it. And so others have proposed instead focusing on the people who see very, very large amounts of hate speech and other extremist, radicalizing content on a day-to-day basis, and focusing on that number specifically. That seems like a good idea to me. And I am very sorry, there was another part to your question. I forgot what it was.

Damian Collins:

That's fine, but just on that, you think that's technically easy for them to do? Rather than looking at hate speech as a total, to say: we can identify people who are heavy consumers of hate speech, and who may be being radicalized by it. You think that's something they've got the technical capability to do?

Sophie Zhang:

I think they probably have the technical capability to do it, for instance, in English. Do they have the technical capability to do it in every language? That seems a bit unlikely to me, although they could increase it very quickly. Like, ultimately it takes resources to do this, and such work on integrity, investigations, and CVE takedowns is chronically under-resourced, which I think is a statement on the company's priorities. You don't hear about the ads marketing team at Facebook being chronically under-resourced, for instance.

Damian Collins:

Indeed. Dean Russell.

Dean Russell:

Thank you, Chair. And thank you for your testimony today. One of the parts that is core to this bill, which we need to get right, is the legislation to make sure that organizations-- specifically Facebook in this instance-- do the right thing. And my question to you is about the culture. You mentioned you've been pretty much to the top to raise concerns about democracy. Would you say that Facebook has a culture that would rather protect itself than protect democracy or protect society? And if so, how robust do we need to be in this bill to make sure that they follow the rules rather than potentially creating loopholes that they will work around?

Sophie Zhang:

Absolutely. So I'd just like to take a step back and remind people that you're asking whether a company whose official goal is to make money is more focused on protecting itself and its ability to make money, or on protecting democracy. Like, we don't expect Philip Morris tobacco to have a division that reimburses the NHS every time someone gets lung cancer and needs to be treated; we don't expect Barings Bank to keep the world economy from crashing-- that's why Britain has its own bank. And so I think it's important to remember that Facebook is ultimately a company; its goal is to make money. And to the extent that it cares about protecting democracy, it's because the people at Facebook are human and need to be able to sleep at night, and also because if democracy is negatively impacted, that can create news articles, which impact Facebook's ability to make money.

With that said, in terms of, for instance, changing the culture at Facebook, or at least creating measures for OFCOM to regulate the company, I have several suggestions that I could make. The first is in terms of requiring companies to apply their policies consistently, which is, I believe, in clauses 9 through 14 of the bill. Because the idea that fake accounts should be taken down was written into Facebook's policies, and what I saw was that there was a perverse effect: if I found fake accounts that were not directly tied to any political leader or figure, they were often easier to take down than if I found fake accounts that were.

And so this created a perverse incentive for major political figures to essentially commit a crime openly. Imagine a situation in which a burglar robs a bank; the police would hopefully arrest them very quickly. But suppose the burglar is a member of parliament who is not wearing a mask and openly shows his face, and the police decide to take a year to arrest him because they are not sure about arresting a member of parliament-- that is essentially the analogy for what is going on at Facebook.

And so others have made a proposal to require companies over a certain size to separate product policy from outreach and government affairs, because at Facebook, the people charged with making important decisions about what the rules are and how the rules get enforced are the same key people as those charged with keeping good relationships with local politicians and members of government, which creates a natural conflict of interest. Facebook is a private company, but so are, for instance, the Telegraph, the Guardian, et cetera, and those organizations-- at least, I hope-- keep the editorial department very separate from the business department. And the idea of the Telegraph killing a story because it made a politician look bad is, at least to me, unthinkable, and I hope it would be to the other members of the committee, but of course you know better than I do.

Dean Russell:

And if I may, just very briefly, because I'm conscious of the colleagues that want to come in: do you think it would focus the minds of the senior leadership in Facebook if they were liable for the harm that they do, both to individuals and to society, from what happens within Facebook? For example, do you think the situation you shared earlier with the elections would have been dealt with not in 10 months, but perhaps overnight, if they were liable for the impact of that?

Sophie Zhang:

Potentially, but it depends on precisely how they are liable, and it depends on precisely how the rules are enforced. What I mean is that the OFCOM bill-- I mean, sorry, the Online Safety Bill-- as I understand it, is focused on liability for harm in the United Kingdom, which is an approach that can make sense for the United Kingdom, as it has robust institutions and robust cultures; but of course Honduras is not the United Kingdom, and Azerbaijan is not the United Kingdom. These are authoritarian countries. I see it as highly unlikely that Honduras or Azerbaijan would take an approach that requires Facebook to take down the inauthentic networks of their own governments. The other point that I want to raise is how it is enforced. Because, I mean, I've read the text of the bill-- it took quite a while-- and my understanding is that the first means of enforcement is self-assessment by the company in terms of records of reports, in clauses 7 and 19. And so this may not be reliable, and it may actually create an incentive for companies to avoid acknowledging problems internally. What I mean is that if you bury your head in the sand and pretend that the problem doesn't exist, then you don't have to report as much to OFCOM, because if you look for crime, you are more likely to find it. And so companies will have an incentive to look less. And so with regards to enforcement, I have two separate proposals that may be difficult to apply, but I'm going to make them nevertheless. The first is to try to independently verify the ability of each platform to catch bad activity by having OFCOM conduct red-team-style penetration test operations on certain types of illegal activity.

What I mean by that is this. If you want to find out how good each platform is at stopping terrorist content, then you have OFCOM send experts on social media to post terrorist content in a controlled, secure manner, and see what percentage of it is taken down and caught. And then you can see: Facebook took down 15%, Twitter took down 5%, Reddit took down 13%-- I'm making up these numbers, of course. And in that case, you can say, oh, these are terrible, but Facebook is the best; we need to focus on the companies that are less good at this. And you could take the same approach with, for instance, child pornography, and it can also be used in reverse. For instance, if you're worried about harassment, you could have people report benign content to see what is done to it-- whether the content is incorrectly taken down. And so ultimately the goal is to take down the most violating content while doing the least harm to real people. Because of course you could stop everything bad overnight by banning social media in Britain, but that is obviously not what we want to do.

The second proposal that I would make is to require companies to provide data access to trusted researchers, and to provide funding for such researchers, to have more independent verification. However, this does create some privacy risks. Aleksandr Kogan, after all, was also a university researcher.

Jim Knight:

Thank you very much for appearing before us and indeed for reading the whole bill-- very impressive indeed. Do you think the bill should be amended to include in scope disinformation which has a societal impact as well as an individual one?

Sophie Zhang:

I think that's a very difficult question. Right now that presumably falls under clause 46 of the bill, which details the banning of content harmful to adults. And so my concern is, how do you define this? Because these definitions are highly subjective and they may be difficult for companies to determine. Right now I think they are based on companies' definitions of what they believe, which creates an obvious gap for OFCOM enforcement, in that companies can argue, well, we don't think that's bad after all. I don't know the legalities involved and the regulation involved.

I would note that for most social media platforms, at least, the use of fake accounts, especially to spread messages inauthentically, is already banned; the question is more one of enforcement. I mean, there are many rules that are not fully enforced. I believe it is illegal to bear arms and armor into Parliament, but presumably there aren't guards at the door checking for it in this modern day and age.

So part of the issue also is that this committee is naturally focused on Britain, and where I found the most harm was predominantly not in Britain, but in countries in what's called the Global South, whose authoritarian governments were creating activity to manipulate their own citizenry. With regards to activity in Britain, what this could be targeting is, for instance, foreign inauthentic activity. And so ultimately, I don't know if that's the best approach. It may be a better approach, for instance, to require companies to coordinate very closely with MI5 or MI6 in defending Britain's security, if that is the specific concern, but I am not a regulator and I am not a legislator, and I don't have good familiarity with the issues involved in a topic as big and potentially subjective as banning disinformation.

Jim Knight:

Okay. Thank you. The bill, as you will recall, imposes duties to protect content of democratic importance. I'm interested in how you think a company like Facebook might interpret that, particularly given that, you know, the misinformation and fake content that you've been working on, in my view, damages democracy. You could interpret the duty to say, well, we should allow all political content because that's safeguarding democratic importance, or you could say, no, we need to work harder on fake accounts in order to protect democracy from harm. Which sort of direction do you think a company like Facebook would go?

Sophie Zhang:

I think that Facebook would interpret it in a way that favors what Facebook is already doing. And so in this context, this would turn into, for instance, protecting and disseminating content that contains information on, for instance, when to vote, what the elections are, and where the voting locations are, and potentially also protecting content that is controversial by public figures and politicians, under the official justification that we should allow people to speak out openly even when they are important figures-- which is essentially what Facebook is already doing. And I hope this makes sense.

Jim Knight:

Well, one last question. You talked right at the beginning about the difference between freedom of expression and, I think, freedom of distribution, whatever the phrase was. A lot of the discussion is around the response of platforms being to take stuff down, but clearly there are other actions that can be taken to prevent the amplification of content, to prevent things being shared. Do you have any advice for us on the sorts of things that platforms can do short of takedowns, so that they're protecting freedom of expression, protecting political content, but also protecting us from harm?

Sophie Zhang:

This is a thorny question, because what companies can do theoretically involves things like reducing the distribution of certain types of content by making it seen by fewer users, and this has of course raised concerns and controversies over what people call shadow banning-- in the United States, at least; I don't know if you've heard something about it similarly in Britain, but at least in the United States it is somewhat controversial. And ultimately it is not always very reliable either. What I mean by that is that, for instance, when misinformation gets fact checked and then has its distribution reduced, the fact checkers naturally do not have the time to fact check every single piece of content, and so they naturally focus on what's popular. And so when something is fact checked and found to be misinformation, its distribution is reduced and a fact-checking label is appended-- "this has been fact checked by this organization, you can see it here." The issue is that by the time this has happened, the content has already been popular enough to be fact checked in the first place.

Jim Knight:

And I'm sorry to interrupt, but is it viable, do you think, to require platforms to distribute the fact-check information back to wherever they know the erroneous content was shared, so that people can then say, "Oh, okay. I did see that, but now I see that that was false"?

Sophie Zhang:

It could definitely be possible. My question is, would it be useful, because there has been research showing that sometimes when content is fact checked, people do not believe the fact check and instead dig in their heels. My concern with this approach is that it's focused on reducing the distribution of misinformation, but in that case the distribution is often reduced only once the content has already become popular. And so it is the equivalent of closing the barn door after the animal has escaped. And it's a difficult question, because fundamentally companies of course cannot adjudicate every single piece of content, and you probably would not want them to do so either.

And so ultimately that is why my personal proposals and suggestions have fallen more along the lines of reducing virality in general, by reducing reshares-- for instance, requiring people to go to the initial post to reshare a piece of content, rather than being able to click on a reshare and reshare it again-- and, for instance, going to chronological newsfeed rankings. Because ultimately the problem at hand is not that the content is being made in the first place, but that it's being seen and widely distributed, and that people have an incentive to make potentially sensational things.

Damian Collins:

Thank you. Thank you. Beeban Kidron.

Beeban Kidron:

Hi, Sophie, and thank you for your contribution. It's absolutely fascinating. I just want to go back on a couple of things that you said. The first is that, right at the beginning, you gave us a fantastic explanation of the difference between hate speech, misinformation and inauthentic activity, but I'd like you, just for the record, to say what you think the primary harms are relating to the inauthentic spread of information-- just for the record, where is the harm in it?

Sophie Zhang:

So just to be clear, when you say inauthentic spread of information, you are speaking about inauthentic activity, not misinformation.

Beeban Kidron:

Activity, indeed.

Sophie Zhang:

And so these have several types of potential harm. There are several types of inauthentic activity that I'm going to broadly break down.

So the word trolls-- I mean, sorry, the word bots-- is used in the modern day to describe two very different types of activity. One is literal bots, that is, computer scripts that have no real human behind them; the term is also used to refer to groups of people who are paid to sit behind a desk and do something-- for instance, "Russian bots." And these are actually very different types of activity that differ considerably in type, scope and behavior. Scripts are very good at creating activity in very large volume and very, very bad at creating activity that is actually intelligent or smart.

If the committee were to replace its staff with computer-generated reports, it would be able to generate a very large number of reports that would be completely useless, which I think is perhaps a good analogy for the impact of scripted activity. Now, I have not personally seen any troll farms-- essentially networks of paid users-- that are run out of the United Kingdom, which is not to say that they don't exist. It's possible, for instance, that they are hiding very well, because Facebook and other companies do pay a lot of attention to the United Kingdom in a way that they don't to other countries, such as Nigeria or Honduras. And at the same time, it's also true that Britain has more of a culture that does not accept such activities.

And furthermore, phones and labor are expensive in Britain. What I mean is that in India, people can buy a Jio phone for the equivalent of 10 or 15 pounds, and someone can be hired very cheaply, and this would be far more expensive in the United Kingdom, of course. And so, going back to the actual question, with regards to harms of inauthentic activity, the types of inauthentic activity that I personally found in Britain were very few, and the main case, which I already described to the committee chair, was that in 2019, in the lead-up to the British general election, there was a candidate for Parliament who received a large number of fake follows from Bangladeshi fake accounts.

And I want to be very clear right now that this had absolutely no effect on the outcome of the election, in my personal expertise and view. With that said, in terms of the possible effects, to me the main concern here is an increase in credibility. What I mean is that Britain is a multi-party electoral system. In 2019, pro-European voters needed to decide, if I don't want the Tories to win, do I vote for Labour? Do I vote for the Liberal Democrats? Do I vote Green? And conversely, people who are Euroskeptics needed to decide, for instance, do I vote for the Tories? Do I vote for the Reform Party or the Brexit Party or whatever they're calling themselves now? Sorry.

Beeban Kidron:

Are you saying that the harm is that someone appears to be more popular than they are?

Sophie Zhang:

Exactly. Exactly. And so, because the appearance of popularity is important in a multi-party political system like Britain's, if someone needs to decide, do I vote for this candidate or another who both share my views, I just want to know who has the best chance of winning in my constituency, one way that they will do so, presumably, is looking on Facebook. And if someone has 1,000 followers on Facebook, that is very different, in terms of credibility, from someone who has 4,000 or 8,000 followers. There are also, of course, other aspects of inauthentic activity others have heard about, such as the spread of Russian inauthentic activity to further certain narratives.

This has gotten a lot of attention, and my personal assessment is that it's gotten perhaps too much attention sometimes. And I'm going to use an anecdote to illustrate. Again, in the lead-up to the 2019 elections, there was a case that people on this committee may be familiar with, the so-called "Boris bots," in which fake accounts were alleged to be posting certain messages in support of Prime Minister Boris Johnson in response to his Facebook posts-- messages such as "winning Boris 100%," "I support Boris 100%," et cetera. These were not bots. These were real Britons who believed that it would be extremely funny to troll their political opponents by pretending to be fake accounts for the purpose of arousing fears. But of course, it raised attention in Britain-- I believe the BBC eventually wrote an article about it. I was asked to urgently investigate it something like six times; after the first two I gave up, because it was very clearly the same thing over and over again.

Beeban Kidron:

And I don't want to interrupt you, but I just want to get to the question of harm, because I noticed in the notes that when you went to Facebook, they said, "This is a minor matter. This is not important." And I'm just trying to get to why it is important, rather than why it isn't, if you like.

Sophie Zhang:

There are different types of importance. It degrades the democratic conversation; it harms the civic discourse. If people don't know who to trust online, then they're unable to trust anything at all. And that can be very harmful. In a society like the United States or the United Kingdom, we take that trust relatively for granted. But in authoritarian countries, you don't know if people are really what they say they are. Perhaps they are informants for the government, perhaps they're fake people who are paid by the government. And that is presumably part of what is going on in countries like Honduras and Azerbaijan.

I would compare it to the paid crowds of the Eastern bloc of yesteryear. When Ceaușescu gave his final speech in Romania, he spoke to a crowd of a hundred thousand people who were mostly bused in and rounded up and given placards to support him. And of course, that crowd turned on him in the middle of his speech, and so began the Romanian revolution, because when you need to have a hundred thousand people in real life, you need to get actual people. There's no way for a thousand to pretend to be one hundred thousand in the real world. And it is extremely hard to control that number of people-- but on the internet, it is very easy for a small number of people to pretend to be a very large number of people.

Beeban Kidron:

That's very helpful. Can I ask you one other question, which is just around the question of scale? You said earlier that, you know, they're not that bothered because it's such a small amount of the overall activity, but when you've got 3.5 billion users, suddenly a very small percentage is enormous. And I just wonder whether you think that they are taking seriously what you consider to be an automated harm.

Sophie Zhang:

I would say that how seriously they take it depends on multiple factors. Because ultimately Facebook is a company, to the extent that it cares about it, it's because it impacts their ability to make money and because people need to be able to sleep at night. And I want to draw a distinction here, in that most of Facebook's investigations happen in response to outside reports and claims. Perhaps MI5 goes to Facebook and says, there's something odd going on in Britain; perhaps in a small country an opposition group goes to Facebook and says, "There is this strange group. We don't know what it is. We think it's fake." Perhaps an NGO goes to Facebook: "Can you look into this and what's going on?" And ultimately what happens when there's an outside report is that there is someone outside the company, with no loyalty to the company, who can hold the company responsible. If Facebook doesn't want to act, they can tell Facebook, "Well, in that case, we're going to go to the New York Times and tell them you don't think our country is important. What do you say to that?" And certainly it will be an important priority at Facebook. This is an actual story.

In my case, I was going out on my own, without recourse to outside reports, looking for unusual, suspicious activity worldwide. And what I found was mostly in the Global South-- which I think is a statement on the fact that the low-hanging fruit is there. What I mean is, firstly, Facebook pays more attention to countries like the United States and Britain, but also to India, because of the importance of these countries to Facebook; and secondly, these countries have more robust institutions that can find and report strange activity.

Meanwhile, the government of Azerbaijan is not going to report to Facebook about the activity created by its own employees. And so, because my loyalties were theoretically to the company, I don't think there was pressure on Facebook in the same way. The argument that I always used was that this was so obvious that sooner or later people would notice-- because, for instance, in Azerbaijan, even BBC Azerbaijan was the target of Azeri governmental harassment, and I always thought it was quite odd that they never noticed and reported on it, quite frankly. And so if it ever got out or was reported-- Facebook has many leaks-- if it got out that Facebook sat on it for a year, it would be absolutely awful for Facebook. Of course, this became a self-fulfilling prophecy, because I am speaking to you about it right now, but we didn't know that at the time.

Debbie Abrahams:

Oh, hello, Sophie. Thank you again so much for providing evidence to the committee today. My question is a little closer to home, actually; it is in relation to fake accounts that might have been used in the 2016 and then the 2020 U.S. presidential elections. I understand that that sort of escalated in the 2020 elections. But I wondered if you had worked on either, in the run-up to 2020 or 2016, to identify those inauthentic accounts, and how that changed.

Sophie Zhang:

So I want to be very clear on two things. First of all, I was hired by Facebook in January of 2018, so I did not do any work with regards to the 2016 elections. Second, there were dedicated people who worked on the U.S. 2020 elections, but because I was moonlighting in this area, I was not one of them. So I did not personally work on this issue; my knowledge of it is limited to what I've read in the press. I mean, I worked sometimes on related issues. For instance, in the United States, in the early spring of 2020-- I believe it was February or March or something like that-- there was a Facebook page that received attention in the American press, because it was alleged to be a Russian information operation. This page was spreading misinformation, and notably it sometimes responded to critics in Russian Cyrillic. So I was one of many people who investigated this, and we quickly found it to be a real North Carolinian who thought it would be very funny to pretend to be a Russian, to arouse the fears of his political opponents, which I suppose is something that Britain and America sadly have in common.

Debbie Abrahams:

Okay. That's very helpful indeed. And apologies for getting my dates wrong. Could I ask you another question then, and I'm sorry if this is a little bit naive as a non-tech person-- but you've expressed very clearly the difference with inauthentic accounts: they may not be presenting misinformation, but they are fake in the way they are distributed and amplified. In that regard, have you noticed whether, for example, once these inauthentic accounts are established, they then morph into accounts that might also provide disinformation and fake news? So that you've really got a hook into the people who might have accessed that original account-- do they change and then provide misinformation?

Sophie Zhang:

So I want to be very clear that this is not an area that I worked on. With that said, the concept that you describe exists. And I'm going to give an example from memory. For instance, suppose that there is a page on Facebook that's called I Love Cats, and it spreads cute pictures of cats, and people follow this page because cats are adorable, and suddenly one day the page changes. The next day, it is I Love the Tories-- sorry to the Tories-- or I Love the Lib Dems or I Love Labour, and it's posting content about how they are great. And so in this case-- I mean, there's no misinformation here, but this is still inauthentic in the sense that the page was pretending to be something in order to gain an audience and then completely changed its message to spread it to the new audience, right? And this is not something I personally worked on. There were other people at Facebook who worked on it, but the concept certainly exists, if that makes sense.

Damian Collins:

Lovely. Thank you so much. Thanks, Sophie. And thank you, certainly, in response to those last questions and the questions from Baroness Kidron. I mean, you've worked on these networks of fake accounts in countries like Brazil and Honduras, as we've spoken about. Do you think in those countries, particularly where there's much less supervision of what goes on on social media, that Facebook could be regarded as a force that is being used to undermine democracy, inasmuch as democracy exists there?

Sophie Zhang:

It's ultimately a difficult question. Is Facebook being used as a tool by authoritarian governments in those countries? Yes, it is. Is Facebook used by the opposition in those countries to get their voices out? Yes, it also is. When I came forward with the network in Azerbaijan, which was run by the Azeri government and focused entirely on harassing the Azeri opposition, I was a bit surprised by the official response from the Azeri opposition leader, Ali Karimli. He would have had every right to criticize Facebook and Mark Zuckerberg and to denounce them for enabling this authoritarian activity, but he didn't. Instead, what he said was something like this, and I'm paraphrasing: 'I thank Mark and Facebook for building this platform. Facebook allows the opposition to get our voices out. With that said, Facebook should hire someone who speaks Azeri,' or something like that.

I'm paraphrasing from memory because, I mean, I'm sure it would have been very tempting for him to denounce Facebook, but Facebook is important in a country like Azerbaijan, which is essentially a one-party dictatorship that is so democratic that in 2013 they accidentally released election results the day before the actual election-- I wish I were joking about that. This is a country in which the opposition does not have other significant tools, and for all the faults of Facebook, Facebook is still valuable to them. Or take Myanmar. Of course, in Myanmar Facebook has absolutely been used to further hate speech and has allegedly helped create the conditions for genocide. At the same time, it's also true that social media has been used by the people of Myanmar to coordinate against this year's military coup d'etat in a way that they weren't able to for the coup d'etat 20 years ago. And so the ultimate question of the net impact of Facebook on democratic societies is very difficult to answer. I hope this is making sense and does not come off as a dodge.

Damian Collins:

No, it doesn't, but I think you've been very clear as well that Facebook doesn't put anything like the resources it should do into dealing with the clearly problematic and harmful areas of content and, alongside that content, the networks of inauthentic accounts that are engaged in boosting it or promoting it. And on top of that, it would seem the executives in the company sort of dissuaded you from investigating these issues.

Sophie Zhang:

I was never directly told no-- well, until the end, when I actually was told no. But most of the time, I was never told no. I would hazard a guess that it was a situation in which people didn't want to have an official answer on the record that would make them look bad if it were ever...

Damian Collins:

In your statement you posted, when you left the company, you said, "I was told to stop my civic work and focus on my roadmap on pain of being fired."

Sophie Zhang:

Yes. That's what I meant when I said I was eventually told no. This was at the end of 2019 and the start of 2020, before then I was never officially told no, including by the vice president. And I'm trying to be clear about this. I hope this is clear.

Damian Collins:

Yes. Okay. Are there any other questions from members? Tim.

Timothy Clement-Jones:

I'm just going to ask a very quick question. So thank you very much for a fascinating session. Whether you think there is a role for a regulator in being able to insist on preventing virality-- you talked about distribution and virality, and the kind of thing I'm thinking about is a circuit breaker. Whether or not the regulator should have the power to insist on that, or whether this is just a tool which should be expected of a platform.

Sophie Zhang:

So my initial reaction is somewhat leery, just because this could set an unfortunate precedent that could be used by authoritarian countries. Like, circuit breaker tools-- Facebook does have circuit breaker tools in countries that face threats of imminent violence; it has tools to tone down virality. But you could also imagine, for instance, a case in the Russian Federation in which Russians protest en masse, using social media to coordinate, and the Russian government insists that social media companies tune down virality and inhibit that activity. So, I mean, I think it's difficult. This is a question that the members of the committee should consider and discuss, but my initial reaction is just that I am leery of setting an unfortunate precedent. The legislation also contains criminal penalties for failure to comply, including imprisonment, and I'm also personally leery of that, because that too has so far been used primarily by authoritarian countries to enforce compliance.

Damian Collins:

Thank you very much indeed. We are extremely grateful for your giving evidence to us this afternoon. We're going to end the session here, as members wish to attend the memorial service for our former colleague David Amess, which is taking place at six o'clock, but we're very grateful for your time and your very candid answers this afternoon.

Sophie Zhang:

Absolutely. Thank you very much. It was a pleasure and an honor.
