Learning from the Past to Shape the Future of Digital Trust and Safety
David Sullivan / Apr 3, 2023

David Sullivan is the founding Executive Director of the Digital Trust & Safety Partnership, which is made up of technology companies committed to developing industry best practices to ensure consumer safety and trust when using digital services.
From “puffer jacket Pope” deepfakes to rapidly proliferating age verification requirements for social media, public interest in online safety is at an all-time high. Across the United States and around the world, not a day goes by without some news of a powerful new digital technology, concern about how that technology could be used for abuse, and accompanying calls for regulation.
This surge of interest in safety is a good thing. With 66 percent of the world’s population using the internet, most of the planet has a stake in how digital services manage safety risks. At the same time, with so many new entrants joining this discussion, we risk forgetting the lessons learned from debates that have been raging since the internet’s inception.
The importance of learning from the past was on display recently at the South by Southwest conference in Austin, Texas, where, on a panel about the future of content moderation, we spent most of our time talking about the history of trust and safety over several decades.
Since that discussion, several lessons have become apparent about the evolution of online trust and safety, which can be mapped across four distinct eras.
1. Community moderation on the pre-commercial internet
In the beginning, there was the primordial pre-commercial internet. This was a world of bulletin boards and mailing lists, where researchers and hobbyists contended with questions of acceptable online conduct on an artisanal scale. From the first attempts to deal with spam emails to the articulation of Godwin’s Law, these early efforts still resonate across the contemporary internet. In particular, they show the importance of putting people in charge of tending to their own online communities through self-governance, as shown today across subreddits and Wikipedia entries.
2. The rise of user-generated content and professional moderation
Next came the advent of large-scale user-generated content, enabled in part by the enactment of Section 230 of the Communications Decency Act in 1996. Frequently misunderstood in public debate, Section 230 not only made speakers responsible for their own online content but also provided the legal certainty that allowed websites to build out the teams and tools responsible for creating and enforcing content policies. The discipline of trust and safety was born in the early 2000s at companies like eBay and Yahoo, and then grew into the massive, global phenomenon of commercial content moderation with the subsequent rise of social media.
These developments took place inside companies, with little engagement between trust and safety teams and outside voices from user communities or civil society, whose focus at the time was largely on how governments could compel companies to remove content.
But the creation of a field of professionals dedicated to preventing the misuse of online services is often overlooked in today’s debates. We must preserve and promote the knowledge of the individuals who have been working on these matters for decades.
3. Demand for, followed by supply of, content moderation transparency
The subsequent era took hold as users of social media and other digital services began to realize the importance of company content moderation, independent of government action. Concern reached a fever pitch in the mid-2010s, as activists—from Arab Spring revolutionaries to drag performers protesting real-name requirements in the United States—realized that these company decisions had a significant impact on their speech.
In 2018, the increasing demands from activists and policymakers alike for more information about terms of service and community guidelines led to a first wave of content moderation transparency measures. At the Content Moderation at Scale conferences, trust and safety teams spoke publicly about their work. And companies like YouTube and Facebook rolled out the first reports about enforcement of their community guidelines.
Demands for transparency didn’t end with the publication of these reports, but instead created a constant back-and-forth between companies and their stakeholders. Today, the focus is less on publishing particular numbers, and more on what kinds of transparency are meaningful and valuable to both expert and general audiences.
4. Institutional evolution toward maturity
Finally, in recent years we’ve shifted into yet another era, one of new institutions for managing the complexity of online content and conduct. The Trust & Safety Professional Association grew out of the Content Moderation at Scale conferences first hosted in 2018, providing a professional society for individuals working at all levels on trust and safety. Our organization, the Digital Trust & Safety Partnership, has set out organizational commitments to best practices in the field, which have been embraced by many of the world’s leading tech companies. Other new organizations, from the Oversight Board to the Integrity Institute, have similarly developed to provide new levels of transparency and accountability across the digital sphere.
These new initiatives will seek to co-exist with a wave of global content regulations, from the EU’s Digital Services Act to online safety regulations in Australia, Singapore, the UK, and many other jurisdictions.
Although it is too soon to definitively state lessons learned from this latest era of institutional development, a few things are clear.
First, we need to constantly remind each other of these historical lessons. New developments like generative AI may be technological game changers, but they do not diminish the importance of the community, professionalism, transparency, and maturity developed over the past few decades.
Second, marginalized communities are always the first to learn how products and policies can be misused for abuse. Today, we must continuously improve the ways we meaningfully bring these perspectives into the development of new products and policies.
Third, policymakers have been quick to call the internet a “wild west” as they promote new content rules. But even a cursory review of recent history shows that online spaces have been far from unregulated. Policymakers must learn from this history if they want to deliver on promises to make the internet safer and more trustworthy.
Even with increasingly frequent jaw-dropping advances in technology, we can’t predict the future. But whatever digital safety risks lie over the horizon, we’ll be better situated to handle them if we learn from a few hard-won lessons from the past.