New report finds asymmetry in social media moderation favors dominant groups

Justin Hendrix / Aug 4, 2021

This week, YouTube CEO Susan Wojcicki penned an op-ed in the Wall Street Journal assuring the reader that her company is "working to protect our community while enabling new and diverse voices to break through." But a new report from the Brennan Center for Justice at the NYU School of Law, entitled Double Standards in Social Media Content Moderation, finds that social media companies such as YouTube, Facebook and Twitter often apply content moderation policies in a manner that disadvantages marginalized groups with mass takedowns of content, "while more dominant individuals and groups benefit from more nuanced approaches like warning labels or temporary demonetization."

Tech Policy Press has recently covered this phenomenon in Palestine and in Sri Lanka, for example. In Palestine, Facebook and Twitter "wrongly blocked or restricted millions of mostly pro-Palestinian posts and accounts related to the crisis," according to the Washington Post. "The companies blamed the errors on glitches in artificial intelligence software." Similarly, activists have documented how YouTube has removed videos documenting Syrian history.

Whether the excuses the platforms offer for these actions are valid is an open question according to the Brennan Center report's authors, Ángel Díaz and Laura Hecht-Felella. They observe that whether "any of the platforms have subjected their automated tools to peer review, third-party audits, or any form of independent review for accuracy or efficacy is unclear. For example, despite Facebook’s assurances that only 'clear' violations are automatically removed, in practice, there is little understanding of the confidence thresholds that trigger automatic removal, how often human moderators override algorithmic determinations, or whether automated tools are being employed globally despite limited training data across languages and cultures."

What's more, the transparency reports that Facebook, YouTube and Twitter laud "in their current form fail to provide the information necessary to evaluate who is most affected by enforcement decisions, whether some communities are disproportionately targeted by hate speech or harassment, and whether automated tools are making more mistakes when assessing certain categories of content," says the report.

In addition to its global focus, the report documents multiple ways in which the platforms fail to act on coordinated harms that impact minorities in the United States, contrasting this phenomenon with the "sweeping removals" that target "dangerous organizations" perceived to be associated with marginalized communities. It finds Facebook, for instance, has only acted narrowly on white supremacy, likely because "taking a meaningful stand against white supremacy would require Facebook to remove content from users with powerful political support or with significant followings within the United States." CEO Mark Zuckerberg personally intervened in favor of Alex Jones, a hate figure who is an ally of former President Donald Trump, even after Facebook's policy team recommended adding Jones to a list of "dangerous individuals and organizations."

Similarly, the report notes that Twitter did not take its most significant action against the QAnon movement until after the January 6 insurrection at the U.S. Capitol, when it finally removed thousands of accounts. And a case study on YouTube's "checkered approach" to dealing with far-right personalities finds the company exercised its discretion, rather than enforcing its own rules, in the case of an alt-right YouTuber who targeted a journalist for harassment.

The report goes on to make a number of recommendations, including legislative ones. Some evoke existing legislative proposals, such as requirements for clearer policies and appeals processes for content moderation, and mandates for transparency reporting. One key recommendation focuses on access to data, and calls on Congress "to establish a federal commission to investigate how to best facilitate the disclosure of platform data to enable independent research and auditing, protect privacy, and establish whistleblower protection for company employees and contractors who expose unlawful practices."

There are also recommendations to the platforms. One is for the platforms to recognize the asymmetries in their practices and shift their policies to more equitably protect marginalized communities. This requires the companies to address basics such as bolstering linguistic and cultural competencies, but most importantly it demands they "acknowledge and document the unique ways in which minority communities are most susceptible to harassment, violence, and hate speech and the ways in which such content can result in both offline and online harms." The report also recommends an inversion of the status quo in the moderation of public figures, holding influential leaders to a higher standard. And it calls for more consistency and transparency around dealing with terrorist and violent extremist content.

As for Wojcicki's assurances, one of the report's authors, having conducted this analysis, is skeptical.

"Where diverse voices are able to break through or document human rights abuses, it can be in spite of content policies that limit their ability to use the platform," said Ángel Díaz in an interview. "YouTube’s rules too-readily associate political speech or news reporting with terrorism in some cases, but can fail to account for how harassment limits creators of color’s freedom of expression in others. Our analysis of YouTube’s content moderation practices finds that the company applies an iron fist to marginalized groups while reserving its more measured approaches for powerful figures and their supporters."
