On Social Media, Transparency Reporting is Anything But Transparent

Jordan Kraemer / Feb 15, 2024

The mission of the ADL Center for Technology and Society (CTS) is to make the Internet a safe, just, and equitable place for all by monitoring and combating the alarming rise of all forms of online antisemitism and hate—from social media to online gaming platforms. We have been fighting online harassment since 2017 through research that informs our policy recommendations and the way we engage with all stakeholders and partners, including industry, civil society, government, and targeted communities.

But the problem is getting worse. In the past year alone, reports of online harassment increased by nearly every measure and within almost every demographic group. Our annual survey of online hate and harassment found that, in 2023, just over half of adults reported experiencing hate or harassment online at some point in their lives, the highest rate we have seen in four years and up from 40% in 2022. Harassment within the past twelve months also rose, from 23% in 2022 to 33% in 2023 among adults, and from 36% to 51% among teens.

Tragic experience has shown that what happens online doesn’t stay online. Hate, harassment, and abuse on social media platforms have real consequences offline, ranging from fears for physical safety to increased anxiety and suicidal thoughts. Our 2023 survey found that one in five teenagers who experienced hate and harassment reported suffering emotional and mental health challenges as a result. Online hate and harassment have in some cases also led to offline violence. For example, the shooter who attacked a Tops supermarket in Buffalo in May 2022 was fed a stream of hate on platforms such as Discord and 4chan. And there is evidence that both the 2018 Tree of Life synagogue shooter and the 2019 Christchurch shooter were radicalized online, on platforms such as Gab and 8chan.

This is not a new issue, but one that we – and other partners in the field – have been raising for years. The CEOs of the social media companies have repeatedly promised that they are working to make their platforms safer, especially for children and young people. To support these claims, they publish transparency reports that are supposed to serve as proof that their content moderation systems are adequate and effective. But just how transparent are these reports?

This is the exact question we set out to examine in our latest report, Platform Transparency Reporting – Just How Transparent? Our research shows that nine of the world’s largest social media companies provide the public with insufficient information on how they enforce their policies against hate speech, harassment, and other harmful content. They publish only partial information about how much hateful content they remove or otherwise action. These reports are also often difficult to find, lack contextual information, and avoid providing detailed insight into how platforms address hate and harassment. To make matters worse, there is no way to independently verify most of the companies’ claims: users, researchers, and civil society must take companies at their word.

Facebook, for instance, claims that in the third quarter of 2023 it proactively removed 94.8% of hate speech occurrences, and other companies regularly make similar claims. Meanwhile, research for our 2023 Hate and Harassment survey showed that Facebook was the social media platform where most harassment took place. While this finding does not necessarily contradict the claims of Meta-owned Facebook (and other platforms), it raises questions about their methods and the scarce information they choose to share with the public (although the recent introduction of the Meta Content Library and API for vetted researchers is a step in the right direction).
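One reason such figures can coexist with widespread harassment is how the underlying metric is defined. Claims like this typically rest on a “proactive rate”: the share of content a platform actioned that it flagged before any user reported it. The minimal Python sketch below illustrates that definition; the counts are invented placeholders, not Meta’s actual figures.

```python
# Minimal sketch of how a "proactive rate" metric is typically computed.
# The counts below are invented placeholders, not any platform's real data.

def proactive_rate(flagged_by_platform: int, reported_by_users: int) -> float:
    """Share of *actioned* content the platform found before a user report."""
    total_actioned = flagged_by_platform + reported_by_users
    return flagged_by_platform / total_actioned

# Hypothetical quarter: 9,480 items flagged proactively, 520 via user reports.
rate = proactive_rate(flagged_by_platform=9_480, reported_by_users=520)
print(f"Proactive rate: {rate:.1%}")  # -> Proactive rate: 94.8%

# Note: hateful content the platform never detects or actions appears in
# neither the numerator nor the denominator, so the metric says nothing
# about how much harmful content remains on the platform.
```

As the final comment notes, the denominator is only content the platform actioned, which is why a high proactive rate is compatible with users still reporting high levels of harassment.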

Until recently, social media companies had little legal incentive to publish transparency reports. Since October 2017, however, Germany’s Network Enforcement Act (NetzDG) has required social media platforms with more than two million users in the country to publish reports about their content moderation efforts. The European Union’s Digital Services Act and California’s AB 587 are more sweeping regulations: both began imposing reporting requirements in the past six months, and both require large online platforms to report regularly on how they enforce their terms of service. As these laws take full effect, the ADL Center for Tech and Society seeks to establish a baseline understanding of what many existing transparency reports do well and how they could improve to better adhere to the new regulations.

Our new report sets out what transparency reporting – if truly committed to user safety and security – would look like. We share our approach to evaluation and briefly summarize each company’s performance. When transparency reports are consistent, thorough, and easy to understand, civil society and governments can determine whether there are gaps between platforms’ guidelines and how those guidelines are enforced. Only with this information will we move closer to a more equitable Internet for all.
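To make “consistent and thorough” concrete, here is one hypothetical shape a machine-readable report entry could take. Every field name below is our illustration, not any platform’s actual reporting schema, and the numbers are invented.

```python
# Illustrative sketch of a consistent, machine-readable transparency-report
# entry. All field names are hypothetical; no platform publishes this schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class EnforcementRecord:
    platform: str
    reporting_period: str        # e.g. "2023-Q3"
    policy_area: str             # e.g. "hate_speech"
    items_actioned: int          # total pieces of content acted on
    flagged_proactively: int     # found by the platform before a user report
    user_reports_received: int   # user reports filed during the period
    appeals_filed: int
    items_restored: int          # actions reversed after appeal or review

# Invented placeholder numbers, for illustration only.
record = EnforcementRecord(
    platform="ExamplePlatform",
    reporting_period="2023-Q3",
    policy_area="hate_speech",
    items_actioned=10_000,
    flagged_proactively=9_480,
    user_reports_received=4_200,
    appeals_filed=350,
    items_restored=120,
)
print(json.dumps(asdict(record), indent=2))
```

If platforms published figures in a shared structure like this, researchers and regulators could compare numbers across companies and quarters and check them against each platform’s stated policies.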

One image from a recent Capitol Hill hearing on online safety went viral: Mark Zuckerberg, chief executive of Meta, standing at a Senate Judiciary Committee hearing and turning to face the families of victims of online abuse. “I’m sorry for everything you have all been through,” Mr. Zuckerberg said to them. “No one should go through the things that your families have suffered,” he added, claiming that Meta continues to work on these issues to spare others this kind of harrowing pain.

The recommendations we share in our report, if implemented, would help Facebook and other companies move in that direction, so that no other tech CEO ever needs to apologize to grieving families again.

Authors

Jordan Kraemer
Jordan Kraemer, PhD, is Director of Research at ADL’s Center for Technology & Society and an anthropologist of emerging media. As a 501(c)(3) nonprofit, ADL takes no position in support of or in opposition to any candidate for elected office. The views expressed here do not necessarily represent the...
