
Section 230 Reform Naysayers Ignore Clear Problems Online – and the Clear Solutions

Neil Fried, Rick Lane, Gretchen Peters / Oct 13, 2021

Skeptics of reforming Section 230 of the Communications Act, which limits platform liability, routinely downplay both the unlawful and harmful conduct that online platforms facilitate through their own irresponsible behavior and the constitutional proposals that can help address it.

Take, for example, Jeff Kosseff and Daphne Keller’s Oct. 9 Washington Post Perspective, “Why outlawing harmful social media content would face an uphill legal battle.” In it, the authors focus on the “misinformation, toxicity, and violent content” that social media amplify. They point out that algorithmic amplification of awful but lawful speech is protected by the First Amendment, making many proposed legislative responses potentially unconstitutional.

This sidesteps, however, not only the platform carelessness highlighted in the recent series of four Senate hearings on protecting consumers and kids, but also the constitutional approach that Professor Danielle Citron and we have each put forward to address it: amending Section 230 so that platforms cannot invoke the liability shield unless they take reasonable steps to curb unlawful conduct on their services.

Ordinarily, businesses have a duty of care to protect one customer from harming another customer or the public. A hotel can be held civilly liable if it doesn’t do enough to limit prostitution on its premises. A nightclub can be held civilly liable if it doesn’t do enough to limit drug trafficking on its dance floor. A pawn shop can be held civilly liable if it doesn’t do enough to limit fencing in its store.

These and many other situations have analogs in the online world. But a 1997 court interpretation of Section 230, which granted platforms overbroad immunity for their irresponsible behavior, has prevented courts from applying the duty of reasonable care in such situations. That decision further enables the platforms’ “move fast and break things” culture, to borrow a phrase from Mark Zuckerberg.

As more of our social, economic, and political lives have moved online, this erosion of the rule of law makes the public less safe and deprives victims of judicial recourse. Adding insult to injury, it gives online platforms an inappropriate competitive advantage over their brick-and-mortar rivals, which rightfully must expend resources to ensure their own behavior does not facilitate illegal or harmful activity.

Restoring the duty of care for online platforms, as we suggest, does not require repeal of Section 230. Nor does it involve government restriction of lawful speech. It simply gives victims access to the courthouse steps when a platform irresponsibly facilitates unlawful or harmful conduct. The victims still must prove their cases, but at least they can be heard.

The reasonableness standard has been developed over more than 100 years of judicial precedent that courts, victims, and platforms can rely on. It provides a mechanism that can account for platform size and the amount of harm, so that smaller platforms and startups are not treated as if they are Facebook or YouTube. And it can adjust as online problems and potential solutions evolve. If the platforms and their defenders are worried about abusive litigation, they should join the tort reform movement, not defend a distortive, harmful, and unjust carve-out for social media.

There is also a constitutional way to address awful but lawful misinformation, toxicity, and violent content on social media—as well as platforms’ erratic and opaque content moderation practices: transparency requirements.

Congress cannot require platforms to take down or leave up lawful speech, nor can it prohibit them from doing so. The First Amendment leaves those decisions to the platforms’ discretion.

But the Supreme Court has held that the First Amendment does allow the government to require that commercial enterprises provide “purely factual and uncontroversial information about the terms under which [their] services will be available,” where the “disclosure requirements are reasonably related to the State’s interest in preventing deception of consumers.”

Congress could adopt transparency requirements that require platforms to:
1) publicly disclose their content moderation policies;
2) create a process by which users can file a complaint with the platform arguing it did not follow its own policies;
3) create a process by which users can appeal a platform’s decision to take down or leave up specific content, or to terminate or not terminate service to a user; and
4) publicly disclose, subject to certain privacy protections, information about the decisions the platform has made to take down or leave up certain content, or to terminate or not terminate service to a user.

Platforms that violate these transparency requirements or their own policies would lose the Section 230 shield and could be liable for breach of contract or a deceptive trade practice. That would give users recourse when platforms moderate inconsistently.

These transparency requirements would also better enable individuals and businesses to decide what platforms to use—potentially prompting new entrants and existing providers to compete based on content moderation practices, promoting innovation.

In addition, the public disclosure requirements would allow policymakers, law enforcement, and researchers to track problematic trends—either with users’ online misbehavior or the platforms’ moderation practices—and develop strategies to address them.

Focusing on platforms’ careless facilitation of unlawful or harmful conduct, along with these two constitutional approaches, would allow Congress to advance a freer, safer, more transparent internet. The platforms shift the focus to awful but lawful speech precisely because that problem is harder to solve, making reform seem futile. Entertaining that misdirection only benefits the tech firms, the central beneficiaries of the status quo.

Authors

Neil Fried
Neil Fried launched DigitalFrontiers Advocacy in January 2020, bringing more than 25 years of experience in the public and private sectors, and testified before Congress on Section 230 reform in June of that year. From 2013 to 2020, Neil was senior vice president for congressional and regulatory affairs...
Rick Lane
Rick Lane is a tech policy expert, child safety advocate, and the founder and CEO of Iggy Ventures. Iggy advises and invests in companies and projects that can have a positive social impact. Prior to starting Iggy, Rick was the Senior Vice President of Government Affairs of 21st Century Fox/News Corp...
Gretchen Peters
Gretchen Peters is Executive Director of the Alliance to Counter Crime Online. She also conducts complex research and investigations into organized crime, fraud and corruption.
