Perspective

What AI Companies Can Learn From Social Media’s Tribulations

Paolo Carozza, Suzanne Nossel / Dec 12, 2025

Paolo Carozza is Co-Chair of the Oversight Board and a professor of law at the University of Notre Dame. Suzanne Nossel is a member of the Oversight Board and the Lester Crown Senior Fellow on US Foreign Policy and International Order at the Chicago Council.

Composite image of OpenAI cofounder and CEO Sam Altman (left) and Meta founder and CEO Mark Zuckerberg (right). Images via Wikimedia Commons

Prompts, pop-ups and nosy digital assistants are a constant reminder that artificial intelligence is reshaping our lives. The public, understandably, is leery. Fresh headlines allege that AI can nudge troubled teens toward suicide, quash intimacy, defame, impede education and facilitate cyberattacks. Now, United States President Donald Trump has issued an order to override state legislation in favor of a national approach that has not yet been specified and that seems unlikely to move through Congress anytime soon.

This is not the first time we’ve felt clanking chains haul us up the hill of a technological rollercoaster. Over two decades of social media revolution, powerful platforms have wrought consequences great and terrible. Facebook, Twitter, Instagram and TikTok have expanded the meaning of friendship, enabled mass movements and democratized the public square. They have also fueled political polarization, disinformation, harassment, self-harm, alienation and violence.

On social media we learned the hard way how to better protect children, contain disinformation and safeguard free speech. Scandals — election influence operations, mental health crises and social media-fueled genocide — forced companies to take safety and integrity more seriously.

When it comes to AI, we need to take to heart the hard-won lessons of the social media era, or we will be doomed to repeat its mistakes with graver consequences.

Nowadays, on Meta, Google and other mainstream platforms, automated systems remove, with improving accuracy, content that incites violence, deceives users, promotes bigotry or violates privacy. European regulations require companies to explain how algorithms recommend content, clearly label political ads and reveal what posts get taken down and why.

Containing the harms of social media remains, at best, a work in progress. But years of effort spotlight crucial lessons for the private and public regulation of AI. The concept of content moderation, which has always seemed a feeble brake on the runaway train of social media, has gradually shown the potential to slow dangers while still protecting freewheeling discourse. Even Elon Musk, a self-proclaimed free speech absolutist, has continued moderation on X, removing child exploitation, harassment and more. While Meta’s Mark Zuckerberg has scaled back moderation on topics like immigration and transgender identity, the company still adheres to roughly 80 pages of publicly available “community standards” governing nudity, bullying, frauds and scams, terror and much more.

Content moderation is messy, and platforms are ambivalent about it, preferring as few restraints as they can get away with. Tech whistleblowers have exposed cut corners and hypocrisies that prioritize profits over protection. After Meta admitted that its platform was used to incite violence against Myanmar’s Rohingya population and that it carried shadowy foreign campaigns intended to sway crucial elections, the company wanted out of the hot seat, creating an independent oversight board (of which we are members) to reconcile its corporate commitment to free speech with the need to protect other values such as human dignity, authenticity, safety and privacy.

The lessons of moderating social media content can help forestall some AI harms. Large language models (LLMs) prompt free expression quandaries similar to those raised by social media. Character.AI is invoking the First Amendment to defend against legal claims by the family of a 14-year-old who died by suicide hoping to be united in the afterlife with a beloved chatbot. Similar wrongful death suits now seem to make news every week. A solar contractor in Minnesota is suing OpenAI over false reports that his business faced accusations of deceptive sales practices.

Free speech issues on social media and in LLMs are not identical. While social media content moderation deals mainly with what users may post, LLM disputes center on information provided by the platform to the user. Chatbots cite sources and varied perspectives, but also render conclusions that can be offensive, false or defamatory, and even prompt harmful offline actions including violence or self-injury. In the United States, social media platforms are generally shielded from legal liability for user-generated posts under Section 230. Legal experts largely agree that LLMs will enjoy no similar protection, meaning that irresponsible policies could elicit paralyzing legal claims.

Both social media platforms and LLMs prompt user concerns and claims over content that is wrong, hurtful or harmful. LLMs need to allow users to object to content they believe is false, defamatory, hateful or harmful, receive a reply and be able to reach a human in cases of severe harm. Amid humanitarian disasters or violent conflicts, human expertise must be surged to ensure that false or misleading information does not fuel deadly consequences. When problematic content is removed for legitimate reasons and with due process, the same information must not then be regenerated and disseminated by incorrigible bots.

For now, the content policies of LLMs are paper-thin. Meta AI’s published “use policy” is just over three pages, contrasted with its voluminous social media community standards. OpenAI’s guidelines are just over 1,000 words. However desperate companies may be to avoid the headaches of social media moderation, provisions like Meta’s bans on “impersonation” and “disinformation” by its LLMs will run right into the same questions of interpretation and line-drawing that demand careful consideration on social media.

Like social media platforms, LLMs are global. Developing and implementing worldwide policies demands local political and cultural awareness as well as cross-border consistency.

Meta’s Oversight Board relies on codified international human rights law to decide when to allow limits on expression. It does so in order to prevent serious and imminent real-world harms, including incitement to violence. This universal approach is equally applicable to content generated by LLMs.

Enforcing lucid rules about specific types of problematic speech is necessary but insufficient. The vast influence of social media and LLMs demands that companies foresee, disclose and act on anticipated risks, a duty enshrined in EU regulation. Social media can abet foreign disinformation, evangelize eating disorders and fuel terrorist movements. Chatbots can do this and more, replacing human intimacy with loveless automation and burying traditional cues of authenticity that underpin trust.

OpenAI CEO Sam Altman has been excoriated for a seemingly cavalier approach to mental health, insisting that sycophantic bots and freely available erotica for adults are nothing to worry about. Social media companies came to learn that empty assurances on user safety by Silicon Valley executives can come back to bite. They came to rely on independent experts, civil society bodies and researchers to inform policies, give feedback and raise red flags. On matters like privacy, terrorist content or COVID-related disinformation, companies like Meta have drawn heavily on outside expertise. Such consultations cannot be window dressing.

When Musk bought Twitter, he announced with fanfare a diverse content moderation council, only to dissolve the platform’s preexisting advisory council two months later. OpenAI has launched a new “non-binding” mental health council. But while the company’s products are available in over 150 countries, the council’s members are all based in North America, save a single Briton.

Credible engagement with experts involves broad representation, substantial financial investment, transparent data sharing, safeguards for independence and resolve to take counsel seriously despite inevitable commercial costs and pressures. For all its ongoing challenges and missteps, Meta deserves credit for being the only social media company willing to take the risk of submitting itself to a meaningful measure of external oversight.

Bette Davis’s character in All About Eve famously announced, “Fasten your seatbelts. It’s going to be a bumpy night.” But online safety in the age of AI is not just a rote matter of clipping in a seatbelt. It means having constraints in place that have been tried, tested and can withstand the force of a crash. Designing safety measures fit for the velocity and destructive potential of AI will require every ounce of experience we’ve gleaned on the harms of digital content, and then some.
