Considering Trust and Safety's Past, Present, and Future

Dean Jackson / Nov 30, 2025

Audio of this conversation is available via your favorite podcast service.

The past few years have featured a great deal of introspection about a professional field which has come to be known as 'trust and safety,' made up of the people who develop, oversee, and enforce social media policies and community guidelines. Many scholars and advocates describe it as having reached a turning point, mostly for the worse.

Joining me to discuss the evolution of trust and safety—not coincidentally, the title of their forthcoming article in the Emory Law Journal—are professors of law Danielle Keats Citron and Ari Ezra Waldman. Also joining the conversation is Jeff Allen, the chief research officer at the Integrity Institute, a nonprofit whose membership is composed of trust and safety industry professionals.

What follows is a lightly edited transcript of the discussion.

Dean Jackson:

The past few years have featured a great deal of introspection about a professional field which has come to be known as trust and safety, made up of the people who develop, oversee, and enforce social media policies and community guidelines. Many scholars and advocates describe it as having reached a turning point, mostly for the worse. Here to discuss the evolution of trust and safety (not coincidentally, the title of their forthcoming article in the Emory Law Journal) are professors of law Danielle Keats Citron and Ari Ezra Waldman. We are also joined by Jeff Allen, who's the chief research officer at the Integrity Institute, a nonprofit whose membership is composed of trust and safety industry professionals. Thank you all for being here.

Danielle Keats Citron:

Thank you so much.

Jeff Allen:

Thank you for the invitation.

Ari Ezra Waldman:

Thank you so much.

Dean Jackson:

If I can take the prerogative to flash us back to the 1990s, after reading your paper I revisited an old essay by Ethan Zuckerman at UMass Amherst on his time developing trust and safety at the web hosting company Tripod, which was a competitor to GeoCities, which, saying those words out loud now makes me sound a little bit like an archeologist. But he describes how those companies encountered novel legal questions as they were thinking about their own trust and safety policies, and how they often responded to those through the path of least resistance, by building tools on the fly and making ad hoc decisions that really only later became codified into norms and policies. And your account of trust and safety on social media struck me as very similar, even though it takes a later starting point. So I wondered, to start out, if you could tell us, just how did the field evolve during its early years?

Danielle Keats Citron:

Well, I first started writing about cyber stalking in 2007, and talking about cyber mobs. And at the time, industry was... well, really nascent. These were nascent tools like Twitter and then Facebook. Folks would reach out to me and say, 'Hey, I'm the only one at...' Like Del Harvey: 'I'm the only one at Twitter dealing with spam and abuse, and I need some help because I understand what stalking and harassment feel like from the inside,' having been stalked herself, 'but I want to help the people I work for.' She was one of two. She was the only... Basically, she hired one other person to help her. Really, Twitter was focused on spam, impersonation, and copyright violations.

And she's like, 'I'm seeing a whole lot of abuse on this early platform.' It's 2009. Early 2009. 'I hear you've written about cyber stalking in a piece called Cyber Civil Rights. I don't know many law professors. I'm not a lawyer,' Del says to me, 'but can you help me figure out what I need to say to people in charge to show, A, what online stalking looks like and is, and how harmful it is? And can you help me?'

And I was like, "Absolutely." It wasn't like anyone had thought about compensating anyone. I was like, "Of course not." I was like, "Let me help you. Of course. Couldn't be more meaningful." And so that began some of my work with folks in trust and safety, which they didn't understand themselves. They didn't call themselves trust and safety. Del was like, 'I deal with copyright, spam, and hate.' She didn't really call herself anything in that sense. And at the same time, once I talked to Del, all these other people would reach out to me to say, 'I heard you helped Del Harvey. Could you help me?'

And the more that I wrote about online abuse and different forms of intimate privacy violations and cyber stalking more generally, the more people would reach out and say, "Will you help me?" Then that evolved into helping and advising Facebook, advising other companies to figure out, what do we mean by hate speech? That evolved over time. And so it was largely a discussion of policy. What do we mean by terms that would be in policy statements, and then dealt with ex post, largely after stuff happened. We had discussions about products, but in the early years we weren't talking about design.

It was really like, 'How do we figure out, when someone lets us know there's been abuse that is silencing them and scaring them, terrifying them, possibly ruining their lives, how do we deal with this?' So I would help them define things, give examples. And in 2011, Helen Norton and I wrote a piece about hate speech and intermediaries, as we called them then, and how they might define hate speech, why they should be transparent in their policies, why they need to give examples, why you go figure out which kind of definition you want or the harms you want to avoid. And that then led to much further conversations and work with companies beyond Facebook and Twitter. Does that make sense? So that was, like you said, 'What are the early years?' And what I can tell you is those early years involved discussions about policy, speech policies and practices, and that those ultimately led into conversations and work about product design. But that evolved over time.

Ari Ezra Waldman:

The only thing I'll add about the evolution... that was absolutely perfect, Danielle, is that I want to disaggregate, a little bit, the two descriptors that you used in your question, Dean, which was you suggested that early on it was pretty ad hoc and companies took the path of least resistance. Those don't necessarily have to go together. Danielle described a world that was very ad hoc. Where someone like Del Harvey saw a problem and then had to figure out from scratch, had to deal with it.

And after that you had people like Danielle, led by Danielle, really, and others who would engage with them, defining these terms and also creating these transparent policies. But at the same time, these platforms were growing. And when you had to start doing these things at scale, then you start getting more professionalized policies. So if the story that Danielle is telling of the beginning of trust and safety is very ad hoc, there was no such thing as trust and safety, and then all of a sudden you add in content moderation at scale and other problems that a more professionalized company will have, then you start getting policies, organizational charts, and reporting structures, and more defined rules that are now, instead of 1,000 words, 25,000 words. And you get contract hires, contract workers who are doing this kind of work and making these kinds of decisions at the moment.

So over time, and we're really not talking that many years, over time it moves from an ad hoc process where it's one person with Danielle in a room, to conversations and then to systems. But the question of whether they enacted rules or followed rules that were characterized by the path of least resistance, that's an entirely different question that I hope we return to.

Dean Jackson:

Could you put a rough date from the beginning of this work in, say 2007, when you first engaged Twitter, Danielle, to when things started to become more formalized? Because your piece, after all, goes all the way to the present day. And so I'm wondering where in the timeline you've just landed us.

Danielle Keats Citron:

Okay. I would say, and I don't know, Jeff, if you're going to disagree, but at least in my own experience, and this is just anecdotal. This is me, Mary Anne Franks, the Cyber Civil Rights Initiative, which we established in 2013, though we long did work before we established CCRI, either individually or with Dr. Franks, working with companies on this kind of ad hoc, in the sense of less-professionalized, basis. That lasted, I would say, from 2009 to about 2011 and '12. And during that time period, I'll never forget moments like the leaking of Facebook's internal employee manual on hate speech. I remember sitting in a room at Hogan Lovells with Kevin Bankston. This leak had just happened. This is 2011. The leak of their terms of service. They were not transparent about all those policy discussions I was having with them. It was all just internal to the company.

They were trying. They were working hard on figuring out, what is stalking? What are threats? What does bullying mean? Bullying, different from stalking. All these things they were working on. They weren't telling the public because of Section 230. It's really important to note that there's law that shields all of these platforms from responsibility. And this goes back to Ari's really important point: it wasn't like they were shy about doing it. They were actually pretty proactive given the fact that they enjoyed legal immunity, especially when they were small and we weren't talking about big advertising dollars.

So these companies, I would say between 2009 and 2011, '12, are writing policies, they're working on harmful activities and thinking about how they would deal with them. The tools were sometimes more blunt than we have now, but nonetheless... and the pressure started arising when there would be leaks from people, as Ari was talking about, increasingly the contract people executing these policies. They sometimes would talk to the press and leak these policies. It's such a different world now, where you go and click on terms of service and community guidelines and you see these very long explanations.

They did not exist at all. Twitter might have said five words in the beginning, "We do not allow copyright violations, spam, impersonation, and CSAM." Period, the end. No other explanation. But internally, as Ari and I were explaining, they're working on it on the inside. Defining things, trying to be more systematic, growing into policies and systems that professionalize it, at least to a certain extent. And really, at least from what I saw from the outside and inside, in 2011, 2012, '13 we start to see not only leaking of policies, but more transparency, because there's pressure to have more transparency and people are writing about it. I'm so not the only law professor. There are tons of us now thinking and writing about content moderation, stalking, threats, harassment, non-consensual intimate imagery, as I was doing with Dr. Franks.

And so I think it's in that time period. And just before Gamergate, just before The Fappening, where I had been watching women and minorities, sexual and gender minorities being stalked, harassed, threatened on Facebook and on Twitter and other sites in an awful, persistent way. And they started to work on it because they wanted to work on it, but it wasn't clear to the public they were doing it.

And it was really only in 2014. And I think, and Ari, you can help me here, but it's only in 2014 that we get high-profile abuse of high-profile people, women in gaming. Zoe Quinn, Brianna Wu, other wonderful humans that I know, and then very high-profile celebrities that catch the media's attention and then force a PR reckoning, in some sense, and an advertising reckoning. So it wasn't just... remember, law shields these platforms from responsibility. All these platforms. But advertisers, in my own experience with these companies, started saying, "You know what? It's not cute to have Toyota ads run against rape threats targeted at Brianna Wu, doxing of her home address, et cetera, and calls to rape her. I don't think I'm into that."

And there were other advertisers who said the same. And then the AG of California, Kamala Harris, got involved with non-consensual intimate imagery. And it's around 2014 that we see far more funds and resources dedicated to content moderation. Far more formalized policies that they then post and share online. And that has a lot to do with advertising pressure. I'd like to think they did it out of the goodness of their hearts, that Mary Anne Franks and I were so convincing about the harms. But in truth, of course, it was market pressures that were brought to the fore, with our pressure, that helped. But it was really Jennifer Lawrence, God bless her, writing in Vanity Fair, "The posting of my nude photo is a sex crime." Right?

Dean Jackson:

Mm-hmm.

Danielle Keats Citron:

And it's Brianna Wu talking to the press. The Boston Globe, Washington Post. Zoe Quinn, Anita Sarkeesian, who'd been tormented for years, a woman in gaming, writing about the misogyny in gaming. I think it's then we had a reckoning of professionalization.

Dean Jackson:

I remember being on a city bus in DC, checking my Twitter. And I followed a fair number of the gaming press, just as a hobbyist. And it was like my feed had been invaded. I had no idea, no context for what was happening until it started to get news coverage. But I saw it happen in real time and never would've predicted the continuing shadow that those trends cast. And I'm glad that you've raised them in the context of this conversation and the origins of this work. But I want to make sure we get Jeff in here because, Jeff, you were a data scientist at Meta. Correct me if I've got your dates wrong, but from 2016 to '19, which puts you a little after the period that Danielle just described. But I'm wondering about your reaction to the way they've characterized the period of trust and safety that maybe came right before your entry into that field. But also in general, the way the piece struck you. I'd love to get your perspective.

Jeff Allen:

Trust and safety is such an interesting field because it is in part so old but at the same time so young. Literally, the term trust and safety dates back to around 2000. And I want to say eBay was the company where trust and safety was first coined. It makes sense that eBay would be one of the earlier companies worrying about this, because people didn't want to buy a product on eBay and then never have it shipped to them. That's a trust issue. Sale of illegal goods is something that eBay had to be concerned about. That's a safety issue. And so it feels natural that eBay would be an early thinker in this space. But I do think that they hit a lot of the right buckets. Danielle and Ari, you get it mostly right, from my personal experience, where from the outside it probably feels very reactionary.

But usually what's happening is that there are people internally who care about a particular issue, and they can't get attention within the company until something external happens. And once the external thing happens is when the company actually moves on it. And so I think a lot of these issues that we've seen have their precursor in a couple of employees who are like, "Hey, this is a problem. We should deal with it." And then sort of advocating for that internally until, finally, there's an opportune moment in the public when they can be like, "Hey, everyone. Do you remember that thing that we've been yelling about for a year? Here it is getting us negative press out in the public. Can we please do something about it now?"

Like I said, it feels very reactionary from the outside. And it's easy, I think, to be cynical from the outside and be like, "Ah. The companies are only doing something when they're getting negative press or when advertisers are complaining or something like that." And I could definitely understand that sentiment, but there is some foresight that goes into it, and there are people who are proactively thinking about it. I like to point out that the first public example of trust and safety or integrity thinking was actually Larry Page and Sergey Brin in the PageRank article from 1997 or '98 or whenever.

And they talk about, in the future, being number one, ranking number one on a search engine is going to be really, really valuable. And so if you're running a search engine, you really have to worry about bad actors who will try to figure out how to manipulate the system to get to that number one spot, because that will have a lot of value from a marketing standpoint. And they were concerned about it from a marketing standpoint, which I think is a lot of the activity, but it's certainly not the only one. And so they're talking about that in 1998.

Fast-forward to 2005, when one of the first search engine integrity issues comes about, political ones at least, which was the 'miserable failure' Google [inaudible 00:17:37]. That's where a bunch of bloggers came together and took the phrase 'miserable failure' as their anchor text and linked it to George W. Bush's biography on the whitehouse.gov website. So if you Googled 'miserable failure,' the number one link was George W. Bush's biography on the White House site, which is kind of one of the first examples of a political influence operation campaign being run online, by some definitions. And Page and Brin predicted that kind of stuff, to a certain extent.

I think another space to really consider is ads and ad integrity because, while 230 broadly protected them from liability around the organic content that was being uploaded to their platforms and that they were distributing, they were not protected around ads. And especially all around the world, there are tons and tons of regulations around ads and what kinds of ads are and aren't allowed. And so ads have been another place where trust and safety has been kind of a longstanding thing. We just didn't call it trust and safety. We just didn't call it by the professional name, but it was sort of like, "Oh, yeah. Ad compliance team." And it's their job to make sure that there are no firearm sales in the UK, no prescription drug sales in the EU.

There are tons of laws around the world that the platforms have to comply with, and so they did build up that infrastructure quite early. But then, picking up from Danielle, from the 2011, 2012 era onwards, the thing that's really getting attention is societal-level impacts. There was the Arab Spring, which was a mix of really positive stories and really negative stories. That was around 2010 to 2012. There was ISIS and terror groups using the platforms, which takes you into 2014. Then you're into Myanmar, Cambridge Analytica, the Russian IRA operation, 2016. And then you're off to the races, where it really feels, from the outside, like the platforms are on the back foot. It's just one new exploit, new vulnerability, new set of bad actors using the platform to do something bad after another, and the platforms responding to it.

And my personal story at Facebook, yeah, starting in 2016, ending in 2019. My integrity time started late 2017, early 2018; that's when I was on integrity projects. And coming out of Myanmar, the IRA scandal, and Cambridge Analytica, there was a beautiful moment inside of Facebook when they were like, "You know what, everyone? We're going to take this really seriously. Every product team, you are all going to spin up your own integrity version of it."

So my first integrity team was the Pages integrity team, which was based out of the Pages org. And they're like, "Hey, the IRA just used the Pages product to reach over 100 million Americans. We need to seriously consider: what are some bad outcomes that might be happening on Pages?" And the company really took it seriously. It was honestly kind of amazing to watch as all the different product teams spun up integrity teams that were worried about any negative impacts coming from their products, and took that seriously. And yeah, it was great. I think it was great for a while. I think we're seeing a... I don't think that trust and safety is dead, but we are entering a new era, for sure.

Danielle Keats Citron:

Can I just intervene to, I think, add a little color? Because we're not going to disagree, Jeff, about the people who really moved the needle, who really cared. Whether that was Yonatan Zunger and Leah Kershner at Google, whether it was... these were the people I worked with really early on. Whether it was Dave Willner at Facebook. Whether it was Del Harvey, and then ultimately Vijaya Gadde at Twitter. They made a difference. Or Sarah Hoyle at Twitter, 2014. But to be clear, yes, Google acknowledged trust and safety, as a marketing matter, in their PageRank article.

But I have to say, even though I presented my book, Hate Crimes in Cyberspace, at Google to a really big audience in 2014. It's about cyber stalking. And I had internal advocates for addressing non-consensual pornography, then called revenge porn, and cyber stalking. But the hardest actor to get on board to address non-consensual intimate imagery and cyber stalking was Google. The hardest. In meetings, YouTube too, they'd say, "We don't touch search. We don't do that," which was absolutely bullshit, excuse me, and not true.

"We never touch search." Untrue. We're going to de-emphasize mugshots.com but could care less about non-consensual intimate imagery. So the monetary engine that's driving all of this led to a very strike-oriented, at some platforms including Twitter and Google, like, "Nope, we're not doing anything." And that wasn't because Del Harvey didn't care, and it wasn't because Leah and Yonatan didn't care. Oh, they cared. They had me. They read my books. They handed out copies to 100 people. They cared. But it was the economic model that prevented at least the C-suite, as I was told, C-suites not having it. And the advertising is not... In the United States, advertising is covered by 230. The argument from Google is so two-faced. It's my speech. It's not my speech, depending on whatever the litigation is.

And so in this era of 2009 to '13, these companies are saying, "Oh, advertising. That's someone else's speech. We're covered by 230." And they made the very same arguments with regard to civil rights law and its leveraging up until two years ago, literally. So it's absolutely not true in the United States. So 230, yes. But advertising is regulated, and hate speech too, in the EU and in countries outside of the EU. And this goes to Ari's point: as these platforms really become geopolitical, they operate across borders. They're way beyond the United States. Their biggest users are in India. Their biggest audiences are outside the United States, without question. But still, in the United States they are dealing with issues very differently because of the business of [inaudible 00:24:03] and shares.

Dean Jackson:

You've walked me into my next question really nicely, which is about the role that economic incentives and trends play in this story. Just a moment ago one of you said that this isn't the end of trust and safety, which is the name of yet another article about the state of trust and safety. I'd also throw in as a reference here Kate Klonick's piece on the end of the golden age of trust and safety, to talk about the era, Jeff, that you were describing in which there was this window of opportunity. And it sounds quite normal to me to talk about the triangulation of external advocates, internal advocates, and key decision makers within companies. That's a very reasonable strategic lens to view these conversations through. But the piece that we're all here to reflect on, I thought, was really critical of the perception that economic trends had driven changes to the trust and safety industry, including mass layoffs over the last few years, the rollback in...

You may not explicitly blame economics for all of these, but just to list some of the trends, the rollback of fact-checking programs. Replacement of procedures like human rights impact assessments with artificial intelligence systems. And when you look at all of these things together, the message seems clear that, for companies at least, trust and safety is viewed as more of a cost center than an investment, and yet there's more than economics and dollars and cents to the place that trust and safety ends up today. So what are the concerns you have about the evolution of trust and safety? Where do you find it in 2025? And what worries you? What trends worry you about the space?

Ari Ezra Waldman:

There is a role that economics, dollars and cents, play. And we don't say otherwise. Neither Danielle nor I, whether in our piece or elsewhere, ever says that there is no role that economics and the finance team play in the evolution of trust and safety. But to suggest, as many other scholars and commentators have suggested, that the reason, and I'll give them the benefit of the doubt and say the primary reason, that trust and safety is changing or has died or we have passed the golden age is simply a matter of, it's a cost center and now there isn't a lot of money, is insufficient.

It's not wrong. It's part of the story, but it's incomplete. And it's a pretty superficial story. If you listen to what both Jeff and Danielle were talking about, I can quibble a little with Jeff's characterization of what constitutes trust and safety and then the value of these "putting up teams" and all these products. These are organizational stories. These are questions about how different parts of an organization learn and respond to outside, external or exogenous, and internal or endogenous stimuli or challenges. So to suggest that this was only because of money ignores the fact that organizations operate in certain ways. And organizations operate in ways that are independent of money.

They operate through how they've organized reporting structures. They operate through how they define what is and what is not in this department, meaning that if I say something is part of the IT department, then the IT people are in charge of my budget. But if I split a department's budget through four different other departments, then I'm going begging for four different people to try to get money for what I have to do. That has nothing to do with economics. Everyone's budget gets cut at the same time. That has to do with organizational structures.

So what was great about working with Danielle on this paper was that we come at this from Danielle's extraordinary, unmatched perspective on the inside of doing this work. And I came at this from being inside these companies and just learning how they operate. And putting that together. So when we talk... and I'll give one example and then I'll stop talking. Jeff talked about this really wonderful moment inside a company where something happens and then all of these different products put up teams, and then everyone is talking about integrity.

And let's, for the moment, assume that all of these things are under this topic of trust and safety, although I have normative reasons to want to resist too capacious a definition, but let's assume that for the moment. That sounds great, but the organizational story reminds us that any department can set up whatever team it wants. I can set up whatever kind of team. I can set up the most progressive, the most design-oriented, the most structural-reform teams.

Think about Congress. Congress can set up whatever kind of committees it wants. But if nothing happens with those teams, if those teams are de-skilled or ignored or given a structural position but no internal structural way to access power, then what's the point? It is almost a tool of people in power to allow sub-departments to set up whatever kind of teams they want, to make them feel like they have a voice, but then to disempower them. To make sure that that voice never does anything or never results in any success or any final determination that affects the business. So we have to move beyond just the economic story to understand the organizational story.

Dean Jackson:

The period from 2018 to 2020 seems like an aberration just because of my own work looking at, especially Facebook's response to social issues in the United States during that period. It would be hard for me to say that those internal organizational teams didn't have impact and didn't successfully advocate for even, sometimes, algorithmic or design changes. And yet we've ended up, five years after the 2020 election, I think in a place that is much worse and where those teams do seem disempowered and perhaps de-skilled. And so Jeff, if you want to react to Ari, but also maybe give your account of the trajectory of the field over the last five years and why it is where it is. I think that'd be really helpful.

Jeff Allen:

I largely agree with Ari. And I can even say that the founding of the Integrity Institute owes itself to a lot of those impacts and effects that he's talking about. So yeah, 2018, 2019, let 1,000 flowers bloom. It's a beautiful moment. Early 2021, Facebook is re-orging the civic integrity team. And a lot of those people ended up being told, "There are no integrity roles for you." Something like half of the people who were working on civic integrity were told, "Yeah, there's no more headcount for you to work in integrity, period; you need to find something else to do at this company."

And that led to enough disgruntled integrity workers to get a handful of us to be like, "Okay. We need organizations in the public that are saying the things that we want to have said. That are saying the things that'll help us win more fights from the inside." So absolutely, it was a great moment. There was a whole lot of activity around it. And Facebook does deserve credit for at least creating that moment and allowing people to explore these topics but, at the same time, we weren't able to have the impact that we needed to have to actually ensure that these problems were solved comprehensively.

And a lot of those teams got downsized or re-orged to the point where there were plenty of people at Facebook who were like, "Okay, cool. We need to think about, what is the solution outside of the company to these problems that we're trying to face? And how do we start playing a role in those spaces as well?" And so I think we have seen a change. If you look at the layoffs and the downsizing of the companies, I don't have a whole lot of data or intelligence telling me that they hit trust and safety roles especially hard, except for a couple of examples.

I do believe that user research teams, particularly at Meta, got hit really hard. And then partnership teams, also at Meta, got hit really hard. But apart from that, as far as I can tell, it was kind of a clean, across-the-board, every-part-of-the-company-is-having-layoffs situation. But the user research teams and the partnership teams, that is kind of telling in how the companies are thinking differently about this space now, because user research was the primary role for people who were talking to users, doing surveys, doing focus groups on, like, "Describe your bad experience of the product to me. Help us understand the bad experience better." And as far as I can tell, those are the teams that got hit particularly hard in the layoffs, which means we're seeing that companies are no longer interested in being proactive and asking the question, how are you having a bad time on our platform?

Users, let us know. What bad experiences are you having? Help us understand your bad experiences better. And I think that's kind of backed up and further strengthened by reporting we've seen in The Washington Post, where the legal teams are putting really heavy restrictions on user researchers and how they can ask the questions, and what questions they're allowed to ask, and those kinds of limits. And so I do think there's really something to be concerned about there.

Dean Jackson:

I've seen some examples of those user surveys, and I think people really don't know how much time and effort Meta in particular spent in the past surveying users on the experiences they had on the service. And it strikes me that the results of those surveys totally contradict the sort of move-to-free-speech narrative that the company has leaned into in recent months, at least as a user-desired phenomenon. People largely report that they want to see less of what we would describe as harmful content on the service. That they think the companies are not doing enough to keep hate speech and other things out of their feeds. They want more content moderation. Maybe more transparent content moderation, but... and I think that's really interesting. I didn't realize that those teams had taken a disproportionate hit in the layoffs.

Danielle Keats Citron:

Just one quick, fun podcast to listen to. It's called Sudhir Breaks the Internet. It's Sudhir's podcast from around 2021. He's a sociologist who was working at Facebook. And he and his team were coming up with ways to de-escalate abusive activity, and thinking about ways to have teaching moments with users. And when they met with Mark and others to present that this actually is really effective, that when we tell users what they did was wrong, they stop doing it, the response was, "That's too expensive." Like, "We're not doing that." So you have really interesting research teams come in, and then he basically quits. He and his friends quit. They're like, "This is dumb. No one is listening to me. I'm doing this hard work."

So even in the golden age, and I'm with you, Jeff. I was working with these companies on Twitter's Trust and Safety Council, then Facebook's non-consensual intimate imagery group, which met very regularly. And Antigone Davis is amazing. You had all these people who really give a rip. But it's when it's low cost. Often when it could be easily systematized, when it's not going to attract an "Oh, this is too expensive," then we're going to go to... It doesn't mean that... I'm saying economics. I'm just in heated agreement, of course, with my co-author. But it is. It's that combination of economics and structures and...

Well, if this is going to cost too much, we're getting rid of that structure. Or we're not going to pursue this teaching of people that they shouldn't misbehave, even though it will redound to enormous benefits, the prevention of mischief later, societal harm averted. It's just too expensive to have people sit down and talk to people about what they were doing wrong, and just explain the rules. So it kind of gives you a sense of the deeper commitments, the value commitments, behind that.

As we saw with Facebook Live. Facebook Live gets rolled out and immediately, two weeks later, rapes are live-streamed across the United States, even other countries. And Zuckerberg's response, rather than hitting the brakes on anything, because some costs are there, is like, "Nope, we're just going to throw... We're going to hire 3,000 more content moderators." Like, "We've spent the money. We're not knocking our scale back." And that doesn't help for Facebook Live, which then enables the Christchurch massacre being live-streamed. Does that make sense? What do they say? You move fast and break everybody. You roll out these... right?

Dean Jackson:

Mm-hmm.

Danielle Keats Citron:

These products. You don't listen to the insiders or you don't ask at the high level. You roll them out and then you say, "Let's tack on safety afterwards." And sometimes, as a matter of design, Jeff, you know better than me. Sometimes, as a matter of design, you can do something, but once stuff is out there, it's very hard. It's often, like, ex post, and that's not helpful.

Dean Jackson:

I want to ask one question about compliance culture, and then I'll try to bundle my last couple of questions together, which will be about politics and recommendations, and whether or not we should be pessimists or optimists about this field. When I was reading your piece, it reminded me of another piece by Daphne Keller at Stanford on the rise of the compliant speech platform. And you've just talked about the way in which companies search for low-cost ways and structures in which they can take action when they need to, but not necessarily as much action as internal or external advocates want. And you write that proceduralist compliance is the dominant form of governance practice in neoliberalism. That when regulations like the Digital Services Act in Europe, or a variety of proposed or passed laws in the United States, proceduralize trust and safety and make it into a reporting function, this could have a negative impact on the field. That it could replace real protections with forms of paperwork.

And when I read this, it's a common critique and one I've seen before and, I think, a really real risk. One people are complaining about. But I always find myself asking, why is compliance the ceiling for corporate responsibility instead of a floor? Why, if there was an era when companies, under public pressure, were doing more, are they now doing less? Should we expect them to always do less once rules are institutionalized? This might be an organizational theory question. There might be reasons why. But I've never heard an answer to that question. And I'd love it if you could maybe explain your thinking here and then respond to that quandary.

Ari Ezra Waldman:

It's a great question. A socio-legal theorist named Lauren Edelman, who recently passed many years too early, and who was a sociologist and a law professor at Berkeley, did a lot of work during her time on what she called legal endogeneity, which was the foundation of the theory I applied in my book on privacy practices inside companies. And I'm going to refocus the question a little bit, because it's not a matter of always doing the least. Compliance culture or legal endogeneity culture inside a company is not always about doing the least. It's about how companies translate external or exogenous legal requirements into internal practices. And this sits within a classic form of socio-legal scholarship. There's law on the books, there's law on the ground, and then there's something in between. This is called the gap. Gap studies.

Dean Jackson:

Very technical.

Ari Ezra Waldman:

Yeah, very technical. So who is in that gap? The regulated entities are in that gap. And it is the job of in-house lawyers, in-house compliance professionals, given the work that Lauren did about human resources, because she did work about discrimination in hiring and firing. So she did a lot of work with HR professionals. But there are people whose jobs are, inside the company, to translate what the laws are, and then it trickles down. But it isn't just the HR professional or the lawyer. One of the things that I tried to argue in my book about privacy compliance inside tech companies-

Danielle Keats Citron:

Can we say the name and when it came out? Industry Unbound: The Inside Story of Privacy, Data, and Corporate Power. And it came out in 2021. Sorry to plug, but okay. Keep going, Ari.

Ari Ezra Waldman:

Thank you, Danielle.

Dean Jackson:

For our listeners who won't be able to see the Zoom feed here, I think I see a copy on your bookshelf.

Ari Ezra Waldman:

Yeah, it is. It's right back there. It's-

Dean Jackson:

Got a lovely light blue cover.

Ari Ezra Waldman:

Yeah. And Tech Policy Press was kind enough to write a little piece about it. And we did an interview. Anyway, so in addition to these people, these entry points, having a lot of power to determine what these structures look like or what implementation of law on the books looks like, you have a lot of other people. So you have individuals that translate law into memos, into policies, and into structures. And then, especially in tech companies, you have project managers who take that information and then help plan how products are being developed. And then you have engineers and programmers who have to build products and code things based on some interpretation of what they have to do. And at each point it's like a game of telephone. The message degrades. And when I say, "Degrade," I don't mean that in a normative sense or a connotative sense.

It just literally changes over time because it's being interpreted by people who have different skillsets, who use different languages of communication and languages of power, meaning the HR rep or the in-house lawyer doesn't speak the same language or translate the same words as the coder does. And what that means to me is that it's really not just a matter of doing the least. You can have that structure and you could conceivably do something that's not the least, but what that structure does or what that process does is it inevitably relies on the way each of those people conceptualize their role in the company and how the organizational structures in which they sit either amplify or minimize that conceptualization and the effects of it.

So for example, a product manager or a project manager, product manager will have supervision over lots of different teams. They will do lots of different things. And if they conceptualize their role as someone who is just getting the job done versus someone who is trying to integrate all of these views, user views, design views, management views into a more comprehensive product, you're going to get teams that are tasked with different jobs.

Now think about a coder. If the coder conceptualizes their job as simply, I am here to solve really cool engineering problems, I don't care about anything else, leave me to my task, they're then going to ignore or not be present for other concerns that the product raises. And there have been other socio-legal scholars who have written about this concept of allies already: you need allies for a substantive goal inside the company in order to achieve that goal.

So if you have allies already, if you have people inside the company who conceptualize their role in certain ways, then you could do more within those structures. And it's not just the in-house lawyers. Lauren Edelman, in several of her projects, identified how different in-house lawyers conceptualize their role differently. Some of them think of themselves as guardians. Some of them think of themselves as business people who are trying to just advance the business of the company. Some of the guardian role thinks about themselves very differently, and then they do things differently as a result of how they conceptualize their work.

So that's going to happen at any company. But the nefariousness and the mischief happen when companies use their structures or their policies to manipulate people into adopting certain orientations that advance what the company wants as opposed to what those individuals may think is best. So I think we need to... It's one thing. It's very easy to say, "Well, any company that's trying to seek profit is going to do the absolute least it can in order to comply with the law and to make the most profit."

Compliance there is a dirty word. But even in the context of a company with people who want to do more and with structures that allow those people to do more, my concern, and our concern, with a compliance function for trust and safety is its translation of the mechanism of power. The mechanism of power goes from a holistic, cross-organizational, cross-department approach, meaning that you have people who are thinking about the rules, you have people who are thinking about better design, you have people who are thinking about all the other elements of creating healthy informational environments.

And then you turn it into a thing that a single department is doing, whether that is simply pushing papers or just something that a single department does while consulting with other people. One is qualitatively a better fit for trust and safety. Pushing papers may be a qualitatively better fit for something else where all you need to do is check a box, but it's not a good fit for what we recognize as making healthy speech environments, which, and I thank Danielle for that phrase, takes a more holistic, structural approach.

Danielle Keats Citron:

And that's what we saw with Vijaya Gadde at Twitter. I wish I could have come up with that concept. Ari, you're so sweet. But Jack Dorsey comes aboard 2019... or no. Sorry, 2015. He's like, 'We want healthy speech environments.' And he empowered his GC, Vijaya, to do that. And so it was cross-cutting. Not just trust and safety, it was products. What they did was have this very holistic approach because there was C-suite commitment post-2016. And so... but that gets to, and this is where Ari and I were relying on Ari's work and Edelman's work on legal endogeneity, to show what happens once we're in this, like, "Strip down everything" mode. We saw the firing of trust and safety employees, the hollowing out or destruction at X. But really huge firings across all these platforms.

And then, of course, the rules that exist are in the EU. And so this pivot is not one of these capacious folks who are like, "Let's think system-wise about healthy speech environments." It's where you see this compliance culture that Edelman and Ari explored in their work. How that translation becomes much more mechanical. Am I right? Tell me. How am I doing, Ari? That shift comes with the destruction that we wrote about in our piece, from 2021 on, especially after X but then even more recently at Facebook, YouTube, and others. Twitch was doing this stuff and then it wasn't. Right?

Ari Ezra Waldman:

Yeah.

Danielle Keats Citron:

We were on the Trust and Safety Council and then we were fired. Right?

Ari Ezra Waldman:

Right.

Danielle Keats Citron:

Right. So that's the compliance kind of mindset. And that becomes qualitatively different, right, Ari? That's the shift. And that relies on Ari's and Edelman's work on legal endogeneity that I think we've seen play out.

Ari Ezra Waldman:

And I think that's why it's wrong, or insufficient at least, to talk about trust and safety, or the end of trust and safety or whatever, as if these firings are the triggers. These things are not. This is not new. This is part of a longstanding process. And even if those firings didn't occur, the compliance shift to... the shift of trust and safety to a compliance, single departmental function as opposed to a holistic function, it also would've been problematic, which is why the economic explanation is insufficient.

Dean Jackson:

I wrote a response, of course, to your piece in Tech Policy Press. One of the things I said in my response was that the piece was sort of slow to incorporate politics. That alongside and in parallel to all of the organizational, legal, and economic trends you discussed, there were of course many storylines of political drama. And part of that was political pressure from civil society, from citizens, from politicians, from government. Those have also, in my understanding, in my view, played an important role in driving the evolution of trust and safety. And so I wonder, do you disagree? Or is there a reason, maybe, that you shifted politics to the background in this piece? How do you understand the role of politics? That's one. And I know that I've asked you to be quick, and that's a tough question, but if you could concisely maybe just give us a flavor of a response.

I do wonder if my own take on this comes across as too grim or pessimistic. Jeff, your cover letter in the Integrity Institute annual report, I thought, struck a pragmatic tone about the kinds of things former trust and safety professionals can achieve advocating from the outside. And then finally, I want to give you space to talk about responses. I especially liked the suggestion that trust and safety workers do things to increase their leverage at companies, including through labor organizing. And so if you could talk a little bit more about that as a sort of political economic response to the trends you've identified. But yeah, maybe we could do Danielle, Ari, and Jeff, and choose from that buffet what you want, because I know it's a lot to do in a short amount of time. But if we could hit some of that in our remaining time together, I'd be really happy.

Danielle Keats Citron:

Okay. So two things in this lightning round that I think we did... In my, and I think our, view, gendered violence, gendered abuse is political all the way down. And we are both on the board of the Cyber Civil Rights Initiative and have been pressing and working from what we, I think, understand as politics. We are engaged in policy at the state and federal level at CCRI. And highlighting what we often ignore, gendered abuse and homophobic abuse, that's our project. So I don't think we are ignoring politics. In fact, we are amidst politics in our work on the advocacy side. Right?

Dean Jackson:

Mm-hmm.

Danielle Keats Citron:

And the second: in 2016, after the election, and I think we wrote about this in the piece, I was invited to meet with Jack Dorsey, the former head of the New York Times, and the head of the J School to spend a day together at Berkeley. To spend a day together talking about the disaster of Russian disinformation and the campaign. And what he wanted us to figure out was: how do we earn the public's trust in an era of distrust and disinformation?

And so we spent the whole day talking about how Twitter might do that. It was Jack, Vijaya, Del, myself. And the names... he's amazing. The names of the two other men beside us are escaping me. But that was all about politics. That was about Jack sitting down for a day to figure out, how do we deal with disinformation, deep fakery, cheap fakery, and foreign malign influence campaigns, and how it's often accomplished by online abuse that's gendered? So that was a day of politics. So do you think we... I thought we were making a case for how politics was in there throughout.

Ari Ezra Waldman:

I think Danielle's right. We certainly talk about it. But giving all credence to your question, if by politics you mean the influence of the current administration and its campaign not only to have its allies buy up communication platforms, but also Elon Musk's very intentional campaign at what used to be called Twitter. If you're talking about that, I think it's actually important that we don't attribute all nefarious behavior to these two people, because then we suggest that it was an aberration from a longstanding trend. Trump is not an aberration from the Republican Party. He is the apotheosis of where they have been going for a really long time.

Musk is not an aberration of tech CEOs. He is the apotheosis of where they have been going for decades and decades. And we see it in so many others, from Peter Thiel to Mark Zuckerberg. These toxic men are part of a neo-fascist... That's a weird word to say. This neo-fascist movement that finds allies in the current White House. So part of the purpose of our piece was to recognize that this trend toward compliance in trust and safety is not purely an economic story, that it's an organizational story. The extent? If we were to argue that this is a result of recent political developments, I think that would actually undermine the problem.

Danielle Keats Citron:

It'd be wrong. Yeah.

Ari Ezra Waldman:

Because then... yeah, not only wrong.

Danielle Keats Citron:

Yeah, but-

Ari Ezra Waldman:

I think it would undermine our understanding of the problem because then you could just say, "Oh, once we fix that..." so not to say that it wouldn't have been turbocharged, but I don't think it's an apt characterization of what's happening right now.

Dean Jackson:

I do think we're talking past each other a little bit because I agree with everything you both have said. But, when you limit it to this piece and the story it tells, I don't think you can tell the story of everything that's happened since January 2025 without also understanding the 2020 election, January 6th-

Danielle Keats Citron:

Oh, yeah.

Dean Jackson:

Jim Jordan and the Weaponization Committee. And going back even further to things like Senator Jon Kyl's audit of anti-conservative bias at Facebook.

Danielle Keats Citron:

Sure. Yeah.

Dean Jackson:

These narratives about content moderation have been around for a long time and have impacted the industry and the thinking of people like Mark Zuckerberg and Jack Dorsey and led us, actually, to the things that you just described as the culmination of political trends, not aberrations.

Ari Ezra Waldman:

That's fair. That's fair.

Dean Jackson:

So I completely agree with you. And yet there's so much story, and maybe there's just not enough page length.

Ari Ezra Waldman:

I think that's it. I think that's another... there are a few other hurdles there.

Danielle Keats Citron:

I have two other articles that Ari and I have written in this vein.

Ari Ezra Waldman:

Yeah.

Danielle Keats Citron:

We wrote a piece called "Digital Authoritarianism" in the University of Chicago Law Review Online. And we have a piece forthcoming in the University of Chicago Legal Forum about privacy and democracy's destruction. Those two stories are follow-ups to this piece.

Dean Jackson:

So it's a trilogy?

Danielle Keats Citron:

Yeah. Oh, we got a book, too. Unrelated. Youth Privacy. You want to talk youth privacy, we're there. But does that make sense? So there is a trilogy or plus in this part of the story. So we could keep talking about politics.

Ari Ezra Waldman:

And we will cite your review of our piece.

Danielle Keats Citron:

Yes. Right?

Dean Jackson:

Very kind.

Danielle Keats Citron:

In a way, for us it is like an ongoing... there are, like, two strands of our work together. One is youth privacy, which we haven't talked about and can. And the second is the relationship between democracy, privacy, speech, and online platforms. And content moderation, of course, is imperative to that discussion.

Dean Jackson:

Well, we really need to wrap but I want, Jeff, if you could just maybe take us out, play us out with a response to, is the vibe around trust and safety right now too pessimistic? Should we be looking for reasons for hope?

Jeff Allen:

Yeah, I'm happy to dig into that and to tie it into the politics a little bit. My general take on the politics is this isn't particularly new. It feels new, maybe because it's happening in the US. But if you look at it through a global lens, this has been happening all along. India not too long ago was threatening to arrest all the Facebook employees who were based in India. There's the case of Turkey in its last presidential election, and the deep, deep, sad irony of the fact that Turkish President Erdogan threatened to turn off all the platforms if they didn't remove his political opponents. Twitter/X complied with that. And of course the sad irony is that Twitter/X had just laid off all of the employees who would've helped it navigate that situation to maybe a better outcome, because you don't always have to comply with those kinds of things.

And so, yeah, to the story of, what about those people who were laid off and who are caught up in the sad state of the industry downturn right now? Of course it is sad that we're in this industry downturn. There are a lot of people that I want working on the inside of the platforms. The world is much better off when you have people who care about these things working on the inside of the platforms, because the alternative is nobody who cares about these things working inside the platforms, and that's the worst outcome for everybody.

So it is sad about the industry downturn but if you do want a little bit of silver lining it's, a lot of these people are still staying engaged in this space. And maybe they're not at a big tech company, but now they're filling out the staff at civil society organizations. Now they're filling out the staff at regulators themselves.

And so we're seeing a pipeline being built from that big tech experience toward regulating the industry itself. And if you want one little reason to be optimistic about where these things are heading and why there's still a little bit of hope in the compliance regime or the compliance era: we did a survey of our members about a year and a half ago around policy. One of our questions was, would you like to see more regulation of social media? And 84% of our members said, "Yes, we would like to see more regulation of social media." So regulation of the industry is extremely popular among people who work in it. It's not something that they're scared of or dread. That being said, the type of policy really, really matters.

Dean Jackson:

Thank you so much, all of you. I know you also all have other places to be.

Ari Ezra Waldman:

Thank you so much for having us.

Danielle Keats Citron:

Thank you.
