Emma Llansó is the Director of the Center for Democracy & Technology’s Free Expression Project.
There’s a lot of frustration about social media content moderation these days. Many people point to the hate, harassment, and disinformation available online and argue that online services need to do a lot more to address abusive content. Others look at the political advocacy, art, and educational content that gets erroneously taken down and argue that companies need to change their policies and improve their processes to avoid such overbroad mistakes. (A lot of us think both are true.)
There’s no one-size-fits-all approach to content moderation that will work for every service, and getting closer to good outcomes requires a lot of thoughtful, nuanced work. Unfortunately, the issue of content moderation has also become highly politicized in the US, with lawmakers entrenched in partisan positions on how online services should or should not moderate user-generated content.
Enter a Texas social media law, which embodies many of the concepts and legal theories favored by the right. The Texas law prohibits social media platforms from engaging in what it terms “censorship”: blocking, removing, “de-boost[ing]”, demonetizing, or otherwise restricting content based on viewpoint, including viewpoints expressed off-platform.
The stated aim of the bill is to address alleged “bias” in content moderation, but its primary practical effect would be to stymie efforts by social media platforms to reduce the amount of abusive content available on their services. For example, sites are currently free to ban content promoting terrorism or racism, while allowing news posts or research reporting on the drivers and effects of terrorism and racism. They can also, say, block users from bullying or encouraging people to commit suicide, but allow posts explaining where to get help for people contemplating suicide.
That’s why the Center for Democracy & Technology (CDT) joined other digital rights and civil liberties groups last week to file an amicus brief asking the U.S. Supreme Court to prevent the reckless Texas social media law from going into effect. The coalition includes CDT, the Electronic Frontier Foundation, the National Coalition Against Censorship, R Street Institute, the Woodhull Freedom Foundation, and the Wikimedia Foundation.
Under the Texas law, social media platforms that make judgment calls about the content they do and don’t want to host will face effectively endless litigation accusing them of “bias”. The options for platforms aren’t great: either ban entire categories of content, provide equal time for racist and anti-racist speech, or take their chances with the risk of litigation. In any case, users and the public lose out.
In our brief, we explained to the Court that if the Texas law goes into effect, the public interest will be harmed in at least three ways. First, it will force companies to end or alter any content moderation practices that can be construed as viewpoint-based, tying their hands against fighting abusive content. Second, the risk of litigation will discourage some platforms from engaging in any content moderation at all, even under policies they believe are viewpoint-neutral, leaving users to wade through massive amounts of unwanted content.
And third, other platforms might instead remove even more user speech in an effort to appear more consistent in the enforcement of their content policies. This also means a lot more content filtering, with the dial turned to 11 to avoid appearing underinclusive. That would inevitably lead to overblocking of lawful speech and would create technical barriers to discussing important, controversial issues such as sex workers’ rights, abortion access, and experiences with racism.
The fact is, even with the status quo being far from perfect, Internet users benefit from the availability of social media services that can engage in content moderation, unfettered by laws that limit their discretion. Different services have different structures and functions—compare tweets to a Twitch livestream and its adjacent chat, or a hyper-specific subreddit to a TikTok feed—and these affect the kinds of content moderation tools, policies, and priorities a service will have. Services use their content policies to convey the kind of community and user base they’re hoping to foster, and users and advocates can in turn push companies to change and strengthen their policies to address the issues they face.
What’s more, most of the content that social media platforms moderate under their terms of service is lawful speech protected by the First Amendment. Congress, or state legislatures, wouldn’t be able to require social media platforms to remove most of the hateful, gory, misleading, or just plain annoying content that can make using an online service a terrible experience. But online services can take action against that kind of dross, and we depend on them to do so to make their services usable.
Ultimately, the Texas law should be struck down on First Amendment grounds. Every court to consider the question has found that social media platforms have a First Amendment right to edit and curate the content they publish on their sites. That includes the Eleventh Circuit, which just issued its opinion in the constitutional challenge to a similar social media law in Florida. As that court put it: “We hold that it is substantially likely that social-media companies—even the biggest ones—are ‘private actors’ whose rights the First Amendment protects, that their so-called ‘content-moderation’ decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms’ ability to engage in content moderation unconstitutionally burden that prerogative.”
The Texas law likewise directly impinges on that right, and we hope that courts will ultimately come to the same conclusion there. But before the courts can consider the merits of this case, we need the Supreme Court to preserve the status quo and keep the Texas law from going into effect and throwing social media content moderation into disarray.
Emma Llansó is the Director of the Center for Democracy & Technology’s Free Expression Project, where she works to promote law and policy that support Internet users’ free expression rights in the United States, Europe, and around the world. Emma leads CDT’s work focused on protecting fundamental rights to freedom of expression and preserving strong intermediary liability protections as a core element of legal frameworks that support free expression online. Emma was deeply involved in the development of the Santa Clara Principles on Transparency and Accountability in Content Moderation, is a member of the Freedom Online Coalition Advisory Network, the Christchurch Call Advisory Network, and has served on the Board of the Global Network Initiative. Emma also represents CDT on the Twitch Safety Advisory Council and the Twitter Trust & Safety Council. She earned a B.A. in anthropology from the University of Delaware and a J.D. from Yale Law School, and is a member of the New York State Bar. Emma joined CDT in 2009.