The Unbearably High Cost of Cutting Trust & Safety Corners

Matt Motyl, Glenn Ellingson / Jan 4, 2024

Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Hidden Labour of Internet Browsing / CC-BY 4.0

In 2023, social media companies decided to cut corners by laying off thousands of employees who were experts on combating abusive behaviors and harmful content. Laying off these experts may have saved the companies money in the short term – but at what cost, and will these cuts come back to haunt them?

Predictably, harmful content thrived last year. On X, the platform formerly known as Twitter, hate speech and propaganda increased, and a verification system that helped users more easily identify trustworthy accounts was discarded in favor of one where anyone willing to pay a pittance can obtain a coveted blue checkmark. On Facebook and Instagram, Russian disinformation campaigns continued alongside the ongoing invasion of Ukraine, and Instagram’s recommendation engine helped connect and promote vast networks of accounts belonging to pedophiles consuming and distributing child sexual abuse imagery and videos. While these are specific examples, they are not isolated ones.

Regulators around the world, who have imposed billions in fines for previous trust and safety failures, are alarmed by perceived backsliding by social media companies. The US Federal Trade Commission, which fined Meta $5 billion for failures to protect user privacy in a 2020 settlement, alleges that Meta is putting children at risk through new violations of the terms of that settlement. The European Union, which recently fined Meta another $1.3 billion for related violations, has launched an investigation into X for its “failure to counter illegal content and disinformation” under the Digital Services Act and Digital Markets Act. If X, or any other company, is deemed noncompliant with these acts, it may face penalties of up to 6% of its total global revenue or suspension from operating in the EU. Similarly, Australia’s eSafety Commissioner fined X for failing to disclose information regarding child abuse content on the platform, and sent a legal memo warning Google, TikTok, Discord, and Twitch that they needed to ensure compliance with the Online Safety Act to avoid joining X in facing civil penalties.

Beyond these regulatory investigations, social media companies are facing a wave of civil lawsuits. Forty-two state attorneys general allege that Meta violated the Children’s Online Privacy Protection Act, and traumatized victims of the May 2022 mass shooting in Buffalo are suing YouTube and Reddit for radicalizing the shooter. Some companies, such as the random chat app Omegle, have been effectively sued out of existence by civil litigation from users who were harmed on the platform.

Platforms that fail to sufficiently address harmful content also become risky places for advertisers, who drive 90% or more of revenue at the social media companies. On X, advertising revenue has decreased 55% or more since the company was acquired and its trust and safety experts were laid off. More recently, as owner Elon Musk has disseminated debunked conspiracy theories, like Pizzagate, and seemingly endorsed antisemitic conspiracies, many of the platform’s largest advertisers stopped advertising on X. Likewise, on Instagram and Facebook, advertisements encouraging people to visit Disneyland, buy erectile dysfunction medication, and use the dating apps Match and Tinder have appeared between short-form videos sexualizing children. Since this revelation, Match Group, along with other advertisers, has stopped promoting its brands on Meta’s products, a direct hit to the company’s main source of revenue.

But there is an even worse threat looming for these companies – losing users, and with them the attention they sell to advertisers. If products generate enough bad experiences and do enough harm, people will seek less noxious alternatives. Recent polls reveal that, of the largest social media platforms, Facebook and X consistently have the highest rates of users reporting negative experiences. In fact, nearly 1 in 3 users report having seen content they thought was bad for the world in the previous 28 days. Moreover, a majority of users say this content is likely to increase hate, fear, and/or anger between groups of people, misinform people, and fuel greater political polarization. Additionally, most US adults who use these platforms report feeling annoyed by their negative experiences.

These platforms track user sentiment, so they would be aware of users’ ever-growing negative sentiment – or they would have been, before cutting the staff responsible for improving users’ experiences. In a series of experiments at Meta spanning at least two years, the company’s researchers withheld algorithmic protections against harmful content from a percentage of users and found that many of those users began to disengage and even quit the platform altogether. In contrast, the users who received the strongest algorithmic protections from harmful content actually engaged more over time as their experience improved. Logically, then, companies seeking to build long-term value should take actions that minimize harmful experiences, even at the cost of fleeting, short-term decreases in user engagement. However, public documents reveal that Meta often resists launching interventions that protect its users if they affect short-term engagement, which may explain why Facebook stopped growing in the US in recent years.

Similarly, X, which slashed protections most aggressively this year, has experienced the fastest declines in user activity. One Pew Research Center study revealed that a majority of US Twitter users had taken a break from or left the platform in the past year. Web traffic tells a similar story: visits to X decreased 14% globally, and 19% in the US, year-over-year. Perhaps most strikingly, X CEO Linda Yaccarino seemed to confirm the trend in her remarks at the Code Conference, where she acknowledged that active users were declining.

Short-term cost cutting can be very expensive. Billion-dollar regulatory and legal fines make headlines and dent company coffers. An advertiser exodus can create sudden, crippling revenue drops. But fleeing users – brands, influencers, and consumers – threaten irrelevance and extinction for the social media platforms we all use today. Today’s giant brands – such as Facebook, Instagram, and X – may seem too big to fail, but young, hungry alternatives are springing up all over, even if no clear favorite has yet emerged to shoulder the giants aside. In technology, the only constant is change. Just ask MySpace or Yahoo.

Authors

Matt Motyl
Matt Motyl is a Resident Fellow of Research and Policy at the Integrity Institute and Senior Advisor to the Psychology of Technology Institute at the University of Southern California’s Neely Center for Ethical Leadership and Decision-Making.
Glenn Ellingson
Glenn Ellingson is a technologist specializing in the safety of online platforms. As a Visiting Fellow at the Integrity Institute, he supports online platforms’ global election integrity efforts. He has led teams at Facebook and Instagram working on digital literacy, voter suppression, and civic harassment.
