Facebook's White Supremacist Problem
Justin Hendrix / Aug 30, 2022

Audio of this conversation is available via your favorite podcast service.
The Tech Transparency Project (TTP), a research initiative of the nonprofit Campaign for Accountability, is focused on holding major tech companies to account, including Meta, the company that operates Facebook, Instagram, and WhatsApp. For instance, TTP has collected what it calls Facebook's 'broken promises' on issues ranging from bullying and harassment to fraud and deception to violence and incitement.
A new TTP report released this month, Facebook Profits from White Supremacist Groups, says the company is “failing to remove white supremacist groups and is often profiting from searches for them on its platform,” exposing how it “fosters and benefits from domestic extremism.” To hear more about the findings in the report, I spoke to Katie Paul, TTP’s Director.
What follows is a lightly edited transcript of our discussion.
Justin Hendrix:
Katie, tell me a little bit about you. How did you come into this role?
Katie Paul:
I did not start my life in the tech world. I actually started out wanting to become an archaeologist with a PhD. I was focusing on the ancient world, certainly not new technology. I started looking at social media when I was studying anthropology in grad school during the Arab Spring, particularly looking at social media and the trafficking of antiquities.
After years of doing that, I really started looking into how Facebook, in particular, had become one of the largest digital black markets I'd ever seen for antiquities being trafficked from the Middle East, particularly from complex zones like Syria and Libya. Trying to understand why something so egregiously illegal, a war crime, was allowed to continue on the platform is what drove me into understanding tech policy. And being in DC, I was positioned to learn a lot about tech policy and to find that these types of problems are really endemic across platforms. But particularly with Facebook, whether you're looking at wildlife trafficking or human remains trafficking or antiquities or drugs, the problems are all the same, and it's really about the platform flaws. So that's how I got into the tech world. I went from the ancient world to the modern world quite quickly.
Justin Hendrix:
Tell me a little bit about the TTP, the Tech Transparency Project. How did it come to be, and what does it get up to?
Katie Paul:
So the Tech Transparency Project was formed really to give the public a seat at the table. It started out several years ago as the Google Transparency Project, looking at this massive platform that had a significant impact on all of our lives. Google has become a verb, go Google something, and we wanted to understand the real power and implications of that platform. And as the tech world grew, it was clear that it wasn't just Google; other technology companies like Facebook had this kind of power too. And so the Tech Transparency Project was born out of the Google Transparency Project to use open source resources and FOIA, and really give the public a seat at the table when it comes to these major companies that impact our lives. And that's looking at everything from the harms of what's on these platforms and how they're designed, to how their executives and the companies are influencing legislation and the revolving door, how they're engaging with the government that we elect to represent us, and what that means for how it impacts us.
Justin Hendrix:
So I've invited you here today to talk a little bit about this latest report, Facebook Profits from White Supremacist Groups. This has long been a problem on Facebook, and yet you've brought to light new evidence. Tell me a little bit about the key findings, the top level on this.
Katie Paul:
So this is not the first time we've looked at white supremacy on Facebook. We did a first look at this two years ago, in May 2020, trying to understand just the scale of designated white supremacist groups that exist on the platform. And of course, we found the presence of plenty of white supremacist groups. But the more concerning findings in this report were, one, that Facebook is actually profiting off of searches for these groups. This is more than two years after the company's own internal civil rights audit. Facebook has actually found a way to essentially monetize its failure to enforce its own policies on this kind of content.
Concerningly, some of those searches were monetized with ads for minority groups. So searches for KKK-affiliated groups were repeatedly monetized with advertisements for Black churches, which was quite concerning because of what we know about how certain neighborhoods are targeted by white supremacists. The Buffalo shooter was one example of that. Another really big finding, something we've seen repeatedly whether it's militias or extremists, is that Facebook is actually generating content for these groups. Now, we know that there's a lot with regard to Section 230 and companies getting that kind of protection from liability for third party content, but what we found once again is that Facebook is the only tech platform that actually creates business pages for extremist groups. And about 20% of the content we found had been auto-generated by Facebook.
Justin Hendrix:
So this is Facebook's automated systems essentially pushing out, whenever it identifies an organization that doesn't already have a page or what have you, an automated stub for that entity?
Katie Paul:
Well, essentially what happens is when you list your job on Facebook, for instance, I'm Katie, if I listed Tech Transparency Project and my firm did not already have a page on Facebook, Facebook wants to link that to something. It wants to get that data. It wants to be able to get people's clicks and understand what pages they're going to. So it actually just creates a page where one does not exist. And they've been doing this for many years. Back in 2019, there was an SEC whistleblower filing that found that Facebook was auto-generating pages for groups like Al Qaeda. It even pulled in images from Wikipedia of the bombed-out USS Cole. Similarly, we found some of the auto-generated pages here had iconography for white supremacist groups. And this is not just an old problem: some of these pages were generated a decade ago, others were generated less than a year ago.
And so this is a problem: the company is continuing to create these pages despite the fact that it's been raised over and over again. In fact, Senator Dick Durbin brought it up to Mark Zuckerberg just about a year ago and asked why the platform was continuing to create these pages. And of course, in response, he got a canned answer about how much hate content they identify themselves before it's taken down.
Justin Hendrix:
So tell me a little bit about your methodology. I know you've worked with a couple of different databases, the SPLC, the Southern Poverty Law Center's Hate Map, but also an internal list that, if it were not for the investigative journalists at The Intercept, we wouldn't have seen.
Katie Paul:
That's right. What we wanted to do is not just use the platform like a user would; we wanted to have a more concrete methodology to the groups we were searching. And so we used the latest version of the SPLC's Hate Map and focused on groups related to white supremacy, neo-Confederacy, and racism more broadly. Similarly, we incorporated the ADL's hate symbols database and used similar groups. But what was different from our 2020 report is that, in addition, we used The Intercept's list of Facebook's dangerous individuals and organizations. Now, Facebook doesn't categorize these based on what type of hate group they are, they just say hate group. And from their hate group list, we focused specifically on the ones that are English language or in the US, because Facebook explicitly said the search monetization feature we were looking at is triggered by its key set of English and Spanish language search terms, so we wanted to keep it to English language groups.
In 2020, when we released our report, Facebook's rebuttal to the Huffington Post, which inquired about it, was essentially, "Well, we don't have the same list they do." They have their own internal list, which at the time wasn't public, nor would they make it public. So we were just working off of what these groups had stated, and it didn't match with what Facebook did. I will note that despite that statement, they then quietly went and removed over 80% of the pages we identified.
This year, when we were able to do this, we knew what Facebook's internal list looks like because of that great report from The Intercept. So not only did we incorporate that into our broader data set, but throughout the report, we also broke out how these features, whether it's search monetization or auto-generation, affected the groups that were on Facebook's own list. Facebook likes to tell media, "Well, this group still doesn't match our list." So we looked specifically at their list, and unfortunately, many times the numbers were actually worse. We found that the groups on Facebook's own list had a higher presence on the platform than those on the broader list we had put together incorporating SPLC and ADL.
Justin Hendrix:
So back in 2020, in a Senate hearing, Mark Zuckerberg was asked about some of these issues, and he said, "If people are searching for, I think for example, white supremacist organizations, which we banned those, we treat them as terrorist organizations. Not only are we not going to show that content, but I think we try to, where we can, highlight information that would be helpful." So here's Zuckerberg saying they've got it under control.
Katie Paul:
Well, Zuckerberg said that in 2020. He's also made similar statements in 2021. But the problem with Facebook is that they don't have this under control. And I think that between the reporting from groups like TTP and other outside researchers, as well as what we now know from the Facebook files and the whistleblower documents from Frances Haugen, not only does Facebook know from the outside that this is happening, but internally they've known they've had these problems for many years and have chosen not to fix them in the name of profit. We saw that with the whistleblower docs regarding human smuggling and other issues. So this is really just a continuation of that internal strategy.
And what we're seeing with regard to Zuckerberg's statement is that it's not in fact true that they address these problems. Sure, they ban lots of things. Facebook bans drugs being sold on the platform. They ban terrorism. All of these things still exist on the platform. A ban does not necessarily mean enforcement. And in cases where Facebook has said it looks to redirect people to other resources, one of their biggest highlights to the public was that they have this redirect feature, to redirect people to anti-hate resources when they search for these groups. But that only worked in 30-some percent of the searches, even just for the Facebook-listed groups. So even for the groups on their own list, when you key in that search term, it's not triggering a redirect. That leaves a lot of questions about their willingness to apply those basic features even to their own designated extremist groups.
Justin Hendrix:
So we know from whistleblowers including Frances Haugen that Facebook generally is not making the investments at the scale that would be necessary to address any number of harms on its platform. I'm talking to you on the morning that another whistleblower has come forward around Twitter with similar concerns, or similar evidence that the company was aware internally that it isn't well-resourced enough to address various problems. What do you think needs to happen here? You conclude, of course, that Facebook is not doing what it says it's going to do, but what do you think it should do?
Katie Paul:
Well, I think there are a few questions that we need to consider. One is what the platform should do, and whether you mean that morally. Morally, of course, they should ensure that there's not animal abuse and white supremacy operating on their platform. But Facebook should start with not amplifying this content. There's nothing that says the platform has to amplify or profit from content. Facebook loves to bring up free speech as the reason why it doesn't moderate everything. But this isn't just about the presence of this content. That was something we really tried to highlight throughout the report. It's not about the fact that this content simply exists, but the fact that Facebook amplifies it with its related pages tools, it actually creates content related to these groups, and it profits from failing to enforce its policies. None of which has anything to do with free speech or moderation issues. That's really the design of the platform.
So addressing that design is key, but obviously Facebook needs to invest in more people to address these problems. We know that Facebook has these teams of content moderators through third party providers. They're often not experts on these subjects. And time and again we've seen how small groups of experts that know how to look for this content can easily find how much of it is on the platform and can target the types of networks involved. And there's no reason that a multi-billion dollar company that's had increases in revenue year after year shouldn't be able to put those kinds of investments into the platform. I think what we're seeing now with the pushback on Facebook, and particularly with the fact that they're actually losing stock value for the first time, is that Facebook's failure to invest for over a decade in actually addressing these problems is starting to hurt them.
Advertisers are not as interested when the company's targeting features aren't working the way they promised because of the changes with Apple. Advertisers and users don't want to be engaging on a platform that's amplifying hate, that's amplifying terrorism and harm. And then there are the harms to children, of course, that we know Facebook identified and chose not to address when it came to Instagram. So on all these issues, it seems the chickens are coming home to roost for Facebook's failure to invest in expertise on this front. And when small nonprofits and small teams of researchers can identify these major problems with little effort, it's very clear that even a minute increase in investment would have an impact if Facebook actually listens to the people that are supposed to keep the platform safe.
Justin Hendrix:
In the same week, I believe, that your report was published, there was another report from a civil society group on threats to LGBTQ groups on Facebook. There was a report from Advance Democracy on election disinformation on Facebook and other platforms. It really does seem like this platform, while it makes the right promises, again and again fails to address these problems at scale or to implement its own policies. What should government be doing in your view? Does TTP have a legislative agenda?
Katie Paul:
TTP does not have a legislative agenda. We are a nonpartisan organization. I think it's very clear, just from what we've seen in the public and from what we're seeing from Congress, that there's a desire to regulate tech. It's a wild west right now. There's really nothing governing how these companies operate. Historically, even when there have been lawsuits brought, there's the Section 230 question, right? The sword and the shield, the promise that these companies will be free from responsibility as long as they're using their sword to take down the bad content. And what we've seen time and again is that despite these promises from Facebook, they're not effectively taking down that content. They're not addressing that. And not just that, but Facebook, unlike its tech peers in the rest of Silicon Valley, is the only company that's actively creating content, which raises a lot of questions about whether those pages for terrorist groups or white supremacist groups would even be protected under Section 230, because Facebook itself has generated that content. It's not from a third party.
But ultimately, there does need to be some sort of governing mechanism for these companies to operate and ensure they're not increasing harm. And it's not just within the US; we see individuals in the EU making efforts at this. Australia is making efforts. In Brazil even, Facebook was just fined for failing to address the ongoing endangered wildlife trafficking on the platform. And we're going to continue to see this. But since Facebook is an American company, it's really Congress's role to step up and do this. I will say, I think one thing that's worth noting is that when you look at the recent history of American politics and all the division we've seen, regulating tech seems to be the one issue where there is bipartisan support. In the Senate, in the House, you see members of Congress working together that otherwise would not, because it's become a galvanizing point. And I think that because of that, there seems to be a lot of fear. I mean, tech is spending on lobbying at historically high levels, and that's because they know that regulation is coming.
Justin Hendrix:
We'll see what happens after this midterm cycle. But it's been frustrating to see a number of bills that would address perhaps some of the harms that you suggest here, essentially languish.
Katie Paul:
There are dozens of bills addressing some of these tech harms. And unfortunately, because of the way that Congress is operating right now, we're seeing them stalled. That's why it's also been important to keep an eye on what's happening in the EU, which is moving a lot faster when it comes to tech and is also a major market for Facebook's operations. So it's important to look at these other big markets that can have regulatory sway. For instance, GDPR, which is focused on the EU, still has positive impacts on the way that we deal with things. Every time you have to click accept for cookies, for tracking and things like that, it's a reminder of how, until just a few years ago, people weren't aware that they were being followed on the internet to this extent. And so the EU has power here to help drive policies that help us. But really it's Congress that needs to do something, particularly because we're talking about an American company.
Justin Hendrix:
Is there a broader sense in which Facebook supports white supremacy? I've often thought that... and I'm asking you perhaps to speculate here. But when you look at the history of the company's deference on a policy level, in some cases to internal executives like Joel Kaplan and others who have sought to protect the interests of a particular party inside of Facebook, it has, I suppose, created a set of conditions where perhaps the company is less likely to act on certain content for fear that it might be seen as political. Do you think that's part of this problem? Is that shaping the broader context?
Katie Paul:
I do think that the company's failure to act on certain content, because of concerns about looking biased towards one party or another, certainly impacts the way it addresses this. But I think probably the larger factor is that Facebook is all about profits. And what we've seen time and again, including from Frances Haugen's documents, is that even when there was a fix, a technological fix for a harm on the platform, Facebook chose not to implement it because it would cut into the bottom line. And when you're talking about a platform that is driven by engagement, there are few things that drive anger and engagement and comments more than white supremacy and racism. So these are big engagement drivers, and therefore profit drivers, on the platform as well. Anytime you have divisive content, you're going to have people on the platform for longer. That's more eyeballs on the news feed to see more ads to increase the ad value.
It's not just an issue with racism in America. We're seeing this with hatred and extremism in other countries. We see it with Facebook's role in the genocide in Myanmar and the amplification of that hateful rhetoric. We're seeing it in Kenya; there was a recent report about the role Facebook is playing there. What's concerning is that globally, Facebook is really critical infrastructure, communications infrastructure, in a lot of these countries. It has become that for the US as well. There are large swaths of the American population, particularly people my parents' age, who only use Facebook. They're just now catching up to that technology. So while you can place blame on other platforms, Facebook really holds a large share of the blame because it has ensured that it has become critical communications infrastructure both at home and globally, which means it also should have a large share of the responsibility in keeping its platform clean of that kind of harm.
Justin Hendrix:
So Tech Transparency Project is publishing frequently. You're always coming out with new reports. What is next?
Katie Paul:
What is next? Well, we have a lot in the pipeline. We don't just look at the bad stuff on platforms. We look at other things like Apple and their supply chain issues, another piece of technology that affects our daily lives. We're also going to be looking at how, years later, we're seeing an increase in militia content on Facebook yet again. The closer we get to an election, the more concerning this kind of issue becomes. There's no more question about whether these platforms' amplification leads to real world violence. I think January 6th was the perfect example of that.
But it's not just Facebook, the topic of today. I think there are other platforms, like YouTube, that escape a lot of scrutiny and are not getting the same level of attention even though they are significant amplifiers of harmful content. Even with the recent removal of Andrew Tate, I think YouTube was one of the last ones to weigh in on that issue. And when one company moves on these issues, you see the others scrambling to catch up, so there's a certain amount of them holding one another accountable. We hope to see that with regard to some of these things. But when it comes to white supremacy, none of these companies really seem to be holding one another accountable, and in fact, they continue to profit off of the problem.
Justin Hendrix:
Well, I'm sure there'll be a lot more evidence for you to excavate over the next few weeks and months. But Katie Paul, thank you so much for speaking to me today.
Katie Paul:
Thanks so much for having me.
Social image: August 12, 2017: A member of a white supremacist group at a white nationalist rally that turned violent, resulting in one death and multiple injuries. Kim Kelley-Wagner, Shutterstock.