Making Social Media Safer Requires Meaningful Transparency

Matt Motyl, Jeff Allen, Jenn Louie, Spencer Gurley, Sofia Bonilla / Oct 2, 2024

In 2021, Frances Haugen, the former Facebook employee who testified before the US Senate, shared internal documents showing that Instagram knew it was having a negative impact on millions of teenagers. Two years later, Wall Street Journal reporters Jeff Horwitz and Katherine Blunt, along with researchers at Stanford University and the University of Massachusetts Amherst, reported on vast networks of people using Instagram to share child sexual abuse material (CSAM). More than a year later, Instagram launched “Teen Accounts,” which purports to give parents more oversight of their teenagers’ use of the platform and to limit adults’ ability to contact teenagers on it. Yet many experts are skeptical of how useful this launch will be. For example, Zvika Krieger, former director of Meta’s Responsible Innovation team, stated, “I don’t want to say that it’s worthless or cosmetic, but I do think it doesn’t solve all the problems.”

Unfortunately, it is nearly impossible for external researchers to evaluate the effectiveness of Teen Accounts, or of any other launch from Meta and other social media companies. Currently, social media platforms make it easier than traditional media for pedophiles to share child sexual abuse material, for extremists to incite violence, and for governments to execute genocides. At the same time, the way these platforms work makes it challenging to assess whether they are making progress in mitigating these harms.

But it doesn't have to be this way. It's possible to change how social media companies make decisions. The first step towards a safer social internet is mandating meaningful transparency to incentivize companies to design safer products and create accountability when they fall short.

Social media companies have integrity teams and trust and safety experts who work to make platforms safer. From inside the companies, they protect users and societies from foreign interference, scams, and illegal content. They see the causes of online harms and the impact that company decisions have on platform safety. And they understand how these companies make decisions.

The Integrity Institute, a professional community and think tank that comprises over 400 trust and safety experts, found that 77% of these experts ranked transparency about the scale and cause of harm as the most critical public policy step towards making a safer social internet. This might seem surprising: why don't the experts want policy makers to simply mandate safer design practices?

Experts prefer transparency to mandated design practices because the safest design depends on the nature of the platform and the context in which problems manifest. Sometimes a chronologically-ranked feed is safer than an algorithmically-ranked one. And sometimes not. What we need is to incentivize companies to choose safer, more responsible designs and to properly resource harm mitigation efforts, empowering the people working on platforms to find the right mitigations.

Some companies proactively publish their own “transparency centers.” This is progress, but no company currently shares enough data to verify that its platforms are safe, or to monitor harmful and illegal activities. The limited data that social media companies do share helps them make three claims that downplay the risks on their platforms.

One claim is that the prevalence of harmful content is minuscule. Instagram claims the prevalence of suicide and self-injury (SSI) content is below 0.05%, a number that implies effective content moderation. However, typical Instagram users can view thousands of pieces of content each month. A very small prevalence can still translate to hundreds of millions of exposures to harmful content.
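A rough back-of-envelope calculation illustrates the point. The prevalence ceiling below comes from Instagram’s public claim; the views-per-user and user-count figures are illustrative assumptions, not disclosed data:

```python
# Back-of-envelope estimate of monthly exposures to violating content.
# The prevalence ceiling reflects Instagram's public "below 0.05%" claim;
# the views-per-user and user-count figures are assumptions for illustration.
prevalence = 0.0005                    # "below 0.05%" of viewed content
views_per_user_per_month = 1_000       # assumed: "thousands of pieces of content each month"
monthly_active_users = 1_000_000_000   # assumed: on the order of a billion users

exposures_per_month = prevalence * views_per_user_per_month * monthly_active_users
print(f"Estimated exposures per month: {exposures_per_month:,.0f}")
# ~500,000,000 exposures per month under these assumptions.
# A tiny percentage is not the same as a small absolute harm.
```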

Meaningful transparency requires that companies disclose how many total exposures there are to violating content, not just the prevalence. Societies have a right to know the magnitude of the risks that the platforms could be creating.

Another claim is that the companies remove huge amounts of harmful content. For example, Meta reports removing 49.2 million Facebook child sexual exploitation posts in 2023. Companies appear to want people to equate removing large numbers of posts with an effective harm-reduction strategy. However, effective harm-reduction strategies ensure that few people are exposed to harmful content, regardless of how much content needs to be removed. It is possible that those 49.2 million Facebook posts were seen by a negligible number of people, or by all 2 billion users. The truth is somewhere in between, but a confidence interval spanning 2 billion people is unacceptably wide.
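As a hypothetical illustration (the per-post view counts below are invented, not Meta’s data), the same removal total is consistent with radically different exposure totals, which is why removal counts alone say little about harm:

```python
# Two hypothetical scenarios with the identical removal count but very different
# harm: what matters is how many people saw the content before removal.
REMOVED_POSTS = 49_200_000  # Meta's reported 2023 figure for Facebook child sexual exploitation posts

# Scenario A (assumed): posts are caught proactively, before almost anyone sees them.
avg_views_before_removal_a = 0.1
# Scenario B (assumed): posts circulate widely before being taken down.
avg_views_before_removal_b = 100

exposures_a = REMOVED_POSTS * avg_views_before_removal_a   # ~4.9 million views
exposures_b = REMOVED_POSTS * avg_views_before_removal_b   # ~4.9 billion views

print(f"Scenario A exposures: {exposures_a:,.0f}")
print(f"Scenario B exposures: {exposures_b:,.0f}")
# The removal count is identical in both scenarios; only exposure data
# distinguishes an effective harm-reduction strategy from an ineffective one.
```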

Meaningful transparency requires that companies disclose how many people were exposed to harmful content on the platform and how many people were exposed to high levels of harmful content.

The last claim emphasizes the large sums of money these companies say they spend on protecting users. Recently, TikTok announced a $2 billion investment in trust and safety in 2024. US Senator Lindsey Graham (R-SC) quipped that these numbers are meaningless without context, saying, “$2 billion sounds like a lot unless you make $100 billion.”

This claim also fails to disclose why users are exposed to harmful content. If most exposures to harmful content stem from the platform’s own algorithmic recommendations, then it matters little that the company is investing billions in mitigation efforts. This is akin to an arsonist bragging about how much money they’ve spent repairing the buildings they’ve set ablaze.

Until we have meaningful transparency, our best tool is independent research. For example, the Neely Social Media Index surveys adults using standardized questions about their experiences across platforms to reveal whether harmful experiences are decreasing (or increasing) on specific platforms. These independent efforts are important, but insufficient because they are disconnected from how harmful experiences occur and lack critical platform data.

Meaningful transparency connects the dots.

Authors

Matt Motyl
Matt Motyl is a Resident Fellow of Research and Policy at the Integrity Institute and Senior Advisor to the Psychology of Technology Institute at the University of Southern California’s Neely Center for Ethical Leadership and Decision-Making. Before joining the Integrity Institute and the Neely Cent...
Jeff Allen
Jeff Allen is co-founder and chief science officer at the Integrity Institute. A former physicist and astronomer, Allen left academia for data science in 2013 and has since worked on multiple sides of the internet information ecosystem, including for publishers, platforms, and political o...
Jenn Louie
Jenn Louie is the founder of the Moral Innovation Lab, based on her research initiated at Harvard Divinity School, and works as a Product Manager at the Berkman Klein Center's Applied Social Media Lab. Her research is a compassionate interrogation into how technology is shaping our moral futures and...
Spencer Gurley
Spencer Gurley has been in the intersection of technology, policy, and integrity since beginning his research in ethical AI at the University of California, Santa Cruz in 2019. He started as a data policy analyst before joining the Integrity Institute as a research associate working on a myriad of i...
Sofia Bonilla
Sofia Bonilla is communications lead at the Integrity Institute. Her background covers policy, research, and project management within the fields of international affairs and human rights. She has experience with major public and private sector projects that tackle labor rights issues in global supp...
