The Facebook Oversight Board is making good decisions, but does it matter?

Jillian C. York, Dia Kayyali / Jul 28, 2021

Dia Kayyali and Jillian C. York take stock of Facebook's quasi-independent Oversight Board.

As civil society advocates who have worked in the field of human rights and technology for roughly the past decade—and have sat in rooms with Facebook staff to debate a variety of issues—we are happy to see that the Facebook Oversight Board’s recommendations for how to uphold human rights are by and large congruent with our own. We would be thrilled to see the company begin to implement policies that are in line with international human rights standards. Unfortunately, our experience has shown that the company is reluctant to make changes that could harm its bottom line, whether by costing “too much,” creating any perceived legal liability, or requiring the company to stand up to governments.

That being said, we are watching the Board with an open mind, trying to determine whether Facebook is finally taking advice in a meaningful way. We’ve been waiting to see if the company will implement real changes to its policies, and whether the Oversight Board could expand into a third-party body that makes decisions for multiple platforms. After reading Facebook’s “first quarterly update” on its progress taking on the Board’s recommendations, we believe the short answer to those questions is no. The long answer is a bit more complicated.

Key Oversight Board recommendations and the Santa Clara Principles

The Facebook Oversight Board has been making decisions since December 2020, and is beginning to create what looks very much like a body of precedent. For courts like the United States Supreme Court, to which the Board has often been compared, precedent is “earlier laws or decisions that provide some example or rule to guide them in the case they're actually deciding.” The Board has made 13 case decisions which, so far, have largely reflected recommendations that civil society has made to Facebook over the years and that the company has simply not been willing to carry out.

Facebook is required to abide by the decisions the Board makes on specific pieces of content, but the heart of the Board’s work is the “non-binding policy recommendations” that come with those decisions. In its recent quarterly update, Facebook addresses 18 recommendations made by the Board on various topics.

The Santa Clara Principles on Transparency and Accountability in Content Moderation, created by civil society groups in 2018, outline “minimum levels of transparency and accountability” for content moderation along three points: numbers, notice, and appeals. They incorporate over a decade of research and recommendations from myriad organizations and experts (in fact, one of the authors of the Principles, Nicholas Suzor, now sits on the Oversight Board). Facebook endorsed these principles in 2019 but has since failed to implement them.

Out of the Oversight Board’s 18 recommendations to date, nearly every one echoes the Santa Clara Principles’ call to disclose “detailed guidance to the community about what content is prohibited.” The Board’s recommendations also echo the Principles’ push for more transparency around the use of automation, telling users exactly what policy they violated and why, requiring human review of appeals of automated decisions, and more.

Finally, the Board’s recommendations touch on particularly hard-fought debates around content moderation, including nudity, COVID-19 misinformation, and Facebook’s Dangerous Organizations and Individuals policy. Again, the recommendations echo what advocates, including both of us, have been saying for years.

Nudity and automation

The Board made several recommendations in case number 2020-004-IG-UA, a decision about a breast cancer awareness post that was removed for violating Facebook’s Community Standards on nudity. Concerns that Facebook and Instagram moderation of adult nudity is overly broad—even in the context of the Community Guidelines—have been raised by civil society actors over the years. These recommendations mainly concern the use of automation for content moderation.

The Board called on Facebook to “[i]mprove the automated detection of images with text-overlay to ensure that posts raising awareness of breast cancer symptoms are not wrongly flagged for review.” This is an excellent recommendation. While the specific ask may be new, the broader recommendation that Facebook use its machine learning algorithms to categorize images for positive ends rather than simply for removal is not new; it is something that civil society groups have requested for quite some time.

Similarly, the Board’s decision in case number 2020-004-IG-UA also called on Facebook to ensure that users can appeal decisions taken by automated systems to a human reviewer when their posts are found to violate the company’s policy on Adult Nudity and Sexual Activity. The Santa Clara Principles specifically call for platforms to provide a mechanism for appeal, and note that “[m]inimum standards for a meaningful appeal include human review.”

Unfortunately, although Facebook claims in its update that this recommendation is fully implemented, the report also states that appeals will be reviewed by a [human] content reviewer “except in cases where we have capacity constraints related to COVID-19”—this may sound like an exception, but in fact it has been the norm since March 2020. The company also argues that automation can “be an important tool in re-reviewing content decisions”—but as any user who has been caught in an endless automated appeals loop can tell you, automated appeals are a poor substitute for human review.

“Terrorist and Violent Extremist Content”

Facebook’s internal list of Dangerous Organizations and Individuals (DOI), how it makes those designations, and how it enforces policies related to terrorism have long been opaque. Amidst myriad cross-border and national policy efforts to “detect and remove” so-called “terrorist and violent extremist content” (TVEC) from the Internet, Facebook has increasingly deployed automated content moderation. Although Facebook refuses to provide clear public information on the matter, leaks demonstrate that the platform adheres uncritically to the US State Department's list of designated terrorist organizations. The question of what content should be included under the “TVEC” label, much less removed, is currently up for debate in forums like the Global Internet Forum to Counter Terrorism (GIFCT) and the Christchurch Call. In the meantime, the automated search for such content has led to the removal of large swathes of expression, including documentation of human rights violations, art, satire, and counterspeech, often without the opportunity to appeal.

Four related cases have come before the Board (case numbers 2020-005-FB-UA, 2021-003-FB-UA, 2021-001-FB-FBR, and 2021-006-IG-UA). One of them was the case about Facebook’s suspension of former US President Donald Trump, but the other three cases all involved counterspeech. The Board has made it clear to Facebook that the company should make its DOI list public, that it should be more transparent about how automation is used to moderate content in this category, that it should include that information in transparency reports, that it should notify users when removal is due to a government request, and more. So far, Facebook has made little progress. A Facebook executive touted changes to the DOI policy, but as national security law experts from the Brennan Center point out, the update doesn’t answer core questions about the policy, which “require[s] a fundamental rethink and far more transparency about enforcement, including governmental pressure.”

Based on our experience advocating on these issues over the last several years, we expect the Board will have a particularly hard time getting Facebook to make changes in this area. The Board has made fantastic recommendations, and we hope that at a minimum those recommendations will positively impact the ongoing discussions on these matters in the GIFCT and the Christchurch Call.

What next?

We understand the appeal of Facebook’s external Oversight Board. It gets at the heart of the content moderation dilemma. Advocates from the US and Europe are clamoring against platforms’ power and for government control over the decisions platforms make. We’ve seen a number of legislative proposals in that vein that could present significant threats to freedom of expression, such as the United Kingdom’s Online Harms legislation and proposals to reform Section 230 of the Communications Decency Act in the US. At the same time, advocates from most of the rest of the world, including the Southwest Asia and North Africa regions where we do a lot of our work, point out that as much as Facebook might take down political content, their own governments would happily censor much more speech. And governments in those countries, Turkey for example, have copied European legislation such as Germany’s NetzDG law and used it to censor political opposition.

Content moderation decisions have real world impacts—incitement can lead to physical harm, a call to protest can be stifled, an artist’s livelihood taken away. People’s ability to communicate information about acts of genocide or other human rights violations to the outside world depends on content moderation. It’s hard to look those people in the eye and tell them that the Oversight Board is going to help them—but the fact is, many of them could never obtain justice for violations of their freedom of expression through a court.

While we are, overall, pleased with the Board’s recommendations, Facebook is simply not responding sufficiently to complaints. That’s hardly surprising. We predicted before it launched that, without more teeth, the Board would not achieve its real potential—upholding human rights and protecting the freedom of expression of people who could never obtain that result through courts.

For now, we’d like to see the Board ensure that new members bolster the representation of Facebook’s global user base that the Board currently lacks. And we urge the Board to call out Facebook’s unwillingness to take on its more difficult recommendations. But we’re turning our attention elsewhere. Instead of potentially harmful legislative proposals that seek to tell platforms exactly how to do content moderation, we believe that right now governments should compel platforms to be more transparent about their operations, and should require human rights impact assessments and algorithmic audits. These provisions are being considered in the European Union as part of the massive Digital Services Act. With more transparency, perhaps we will have a better idea of what other legislation is truly needed. And perhaps social media councils, envisioned as “multistakeholder content moderation bodies,” are the answer. But among the cases it has taken on so far, the Oversight Board hasn’t made that case yet.

Authors

Jillian C. York
Jillian C. York is a writer and activist whose work examines the impact of technology on our societal and cultural values. Based in Berlin, she is the Director for International Freedom of Expression at the Electronic Frontier Foundation, a fellow at the Center for Internet & Human Rights at the Eur...
Dia Kayyali
Dia Kayyali (they/them) is a member of the Core Committee of the Christchurch Call Advisory Network, a technology and human rights consultant, and a community organizer. As a leader in the content moderation and platform accountability space, Dia’s work has focused on the real-life impact of policy ...