Trust and Safety Comes of Age?

Justin Hendrix / Sep 25, 2022

Audio of this conversation is available via your favorite podcast service.

Next week in Palo Alto, tech executives who work on trust and safety issues will gather for the inaugural Trust Con, which bills itself as the “first global conference dedicated to trust and safety professionals.” The conference, which takes place on the 27th and 28th, is hosted by the Trust and Safety Professional Association. On the 29th and 30th, the Stanford Internet Observatory and the Trust and Safety Foundation will host a two-day conference focusing on research in trust and safety (tickets are sold out).

As content moderation and other trust and safety issues have been, to put it mildly, at the fore of tech concerns over the last few years, it’s interesting to take a step back and look at the various conferences, professional organizations and research communities that have emerged to address this broad and challenging set of subjects.

To get a sense of where trust and safety is as a field at this moment in time, I spoke to three individuals involved in it, each coming from different perspectives:

  • Shelby Grossman, a research scholar at the Stanford Internet Observatory and a leader in the community of academic researchers studying trust and safety issues as co-editor of the recently launched Journal of Online Trust and Safety;
  • David Sullivan, the leader of an industry-funded consortium focused on developing best practices for the field called the Digital Trust and Safety Partnership; and
  • Jeff Allen, co-founder and chief research officer of an independent membership organization of trust and safety professionals called the Integrity Institute.

Note: Stanford Internet Observatory research manager Renée DiResta is on the board of Tech Policy Press.

What follows is a lightly edited transcript of the discussion.

Justin Hendrix:

I'm very grateful to the three of you for joining me today. I wanted to talk a little bit about the field of trust and safety and where we've got to with it. There are a couple of major events happening out on the West Coast at the end of this month, a conference and a research symposium, that are both focused on trust and safety issues. So it felt like a good time to take stock of where the field is.

And I know just from my teaching that trust and safety has become a more coherent career in many ways. I now have students who have come through studying issues around tech and society, and who can look forward to moving into trust and safety organizations, and having a robust career in those areas. So, that's another thing that perhaps we'll get into a little bit, but first, does anyone want to just preview for us what's happening at the end of this month on the West Coast?

David Sullivan:

Sure. Let me also be clear, I'm not an organizer of either conference, but I'm an enthusiastic participant. There are two conferences. The first is Trust Con, the professional conference organized by the Trust and Safety Professional Association, which is another one of the many organizations that have cropped up in the last few years, which I think attests to the importance of trust and safety, both within the tech industry and within broader society. That one is really geared toward the practitioners inside companies, or working with companies, on this issue of trust and safety.

Shelby Grossman:

And the other one is the Trust and Safety Research Conference, which is co-hosted by the Stanford Internet Observatory and the Trust and Safety Foundation. We're hoping that this is the first annual Trust and Safety Research Conference. It aims to showcase cutting-edge trust and safety research that's happening both inside tech platforms and in academia, and to try to encourage research collaborations across academia and industry.

Justin Hendrix:

David, I want to ask you first just to describe your organization, which has been around for a bit but is still somewhat new, I suppose, depending on how you define new these days. What is it? What does it get up to? And what do you do on a regular basis?

David Sullivan:

Sure. The Digital Trust & Safety Partnership launched in February 2021, about 18 months ago. I joined in August of last year as the founding Executive Director. We bring together technology companies providing a wide range of digital products and services to align around a set of best practices for trust and safety, and a commitment to having those best practices assessed, first through self-assessments and ultimately by independent third-party assessments, to be able to demonstrate that companies really are taking trust and safety seriously and evolving practices in this field.

And I think the most important thing to emphasize is that, as we do this, we are trying to bring companies together around this framework of practices. It's not about content standards, it's not about trying to pursue a common definition of hate speech across many different platforms and many different places around the world. It's about saying that there's a set of practices that are descriptive of what companies are doing, which can evolve to be state-of-the-art over time, from product development through to transparency, and which represent a common approach without trying to align everybody around the same types of content that should be allowed.

Justin Hendrix:

And Jeff, we've had you on the podcast before, but you've had some recent developments with the Integrity Institute, some progress and some forward momentum with what you're doing. But perhaps you could remind the listener what it is and what you're doing.

Jeff Allen:

Of course. The Integrity Institute is part community, part think tank. On the community side, we're a professional community of what we call integrity professionals. These are people who have actual experience working for social media companies on the platforms, tackling the issues of harmful content that are so much in the news today. So they're the people who have actually fought misinformation, or hate speech, or self-harm content, from within the platforms themselves.

We've gathered together to form a think tank, and on the think tank side of things, we're really trying to develop the discipline. Similarly to the DTSP, we're working on trying to figure out the consensus points that integrity professionals can all agree on: "These are good practices when it comes to responsibly designing ranking systems, or being transparent with society about how the platform works and how harmful content spreads on it," and so on. And then we share that expertise with the outside world, so we do a lot of briefings for policymakers, for academics, for civil society, as well as for companies too.

Justin Hendrix:

And Shelby, perhaps you could just give us your perspective on the research community devoted to trust and safety issues. I understand there's a new journal that, for the first time, is specifically focused on trust and safety, and thriving interest in this area across multiple disciplines, from communications through to cybersecurity.

Shelby Grossman:

By background, I'm a political scientist, and before I started this job at the Stanford Internet Observatory, I had actually never heard the phrase trust and safety before. And I think that's a common thing in academia; people in different disciplines use different terms to refer to the study of online harm, and a lot of academics actually aren't familiar with the phrase trust and safety. That causes real problems, because a political scientist might be studying how misinformation spreads, and there's all this really cool computer science work on the same topic being published in computer science journals, but they're just not coming across it, and these two disciplines aren't speaking to each other. So about a year and a half ago, we launched this new journal, the Journal of Online Trust and Safety, that's trying to solve some of these problems.

So we aim to publish cutting-edge, really rigorous, empirical research on online harm from many different disciplines, and also internal research that's taking place at the platforms, along with collaborations between the two. It's going pretty well. The journal is open access, and we have a really fast peer review process to try to solve some of the problems that plague academic publishing and get timely research out there quickly without sacrificing rigor. We've had a bunch of issues so far, we're actually launching our next issue next week, and we're really excited about it.

Justin Hendrix:

I want to maybe just pause and ask you each to characterize where you think "trust and safety" is at the moment as a field. This journal is new, these organizations are new, this effort is new on some level, and yet the problems have been with us for a bit, and they seem to continue to get worse. Just yesterday, of course, there was a big hearing in the Senate where chief product officers for some of the major platforms testified. I would say it was generally as contentious a hearing as any of the prior tech hearings have been, with a lot of concern about online harms ranging from privacy and security issues through to misinformation, hate speech, and violent extremism. Where are we on trust and safety in September 2022, as the industry plans to gather in the Bay Area?

Jeff Allen:

Yeah. It's really interesting, because trust and safety is both old and new at the same time. And probably what's most new about it is just society realizing how important it is. When it comes to integrity work, which is our term for it, though obviously it's a variation of trust and safety, the first good piece of integrity work that I like to point to is actually Larry Page and Sergey Brin's 1998 paper on Google. In the appendix, where they talk about the PageRank algorithm, they actually say, "Hey, ranking highly on search engines is valuable to people. And so people are going to try to manipulate the search engines in order to rank highly for their own self-interest. And here are ways that we can prevent search engines from being easily gamed by bad actors of all sorts."

And they focus on the financial aspect of it; it's very lucrative to rank for "best shoes to buy," and obviously marketers are going to try to take advantage of that. But the principles apply to foreign interference, to people trying to manipulate the public over all sorts of issues.

So integrity thinking, and trust and safety thinking, dates all the way back to the beginning of the internet. But we're really having a moment now, over the past couple of years, where society at large is waking up to the fact that, yes, it is time for this practice to go from something that exists at companies but isn't very formalized to what will be a very established discipline in the future.

David Sullivan:

One thing that's really interesting, and that I learned from the first issue of the Journal of Online Trust and Safety, is that the term trust and safety actually started, I think, at eBay in the very early 2000s, which really speaks to Jeff's anecdote about Google in terms of the financial aspect of this. Even before the social internet, you had companies whose business relied on trust that the marketplace could work, and safety in terms of preventing fraud; that's where a lot of this thinking came from. So some of this goes back quite a long time.

At the same time, I want to say there's been an enormous sea change in the amount of information, as well as, of course, in societal concern about some of these issues in the last few years. One of the things that I think about is when I started at my previous organization, the Global Network Initiative, back in 2011, 2012. When companies were first starting to report information about government requests for user data and content takedowns, no company was willing to report information about how it was enforcing its community standards or terms of service.

The first public reports about that were only issued in 2018. So we only have about four years of really public reporting about some of these really thorny issues of how companies moderate their services. And there's been just a tremendous amount of information that's come out since. That information begets more questions and contested ideas: how do we define this term, what are we talking about when we use that other term, is this data meaningful? But there has been a really enormous change.

I think that the primary point, also going back to Jeff's, is really about formalizing and maturing. That's our focus at DTSP: to say there's a common set of practices companies can use, there's documentation of those practices, and those practices can be subjected to assessments and audits. Some of this is going to come from regulatory requirements that are happening around the world, and in states here in the U.S., less so from Congress these days. But some of it's going to come from professional standards, which is something that you see in all sorts of other industries as they mature over time.

Justin Hendrix:

Shelby, do you feel as if the research community around trust and safety has a common set of goals? You mentioned, of course, there are multiple disciplines at play here. Is there a sense that there's a common thrust?

Shelby Grossman:

I don't think there's a common thrust, but I think that's maybe okay. I don't necessarily think it's a bad thing that different disciplines have their own approaches to studying online harms, I think it prevents groupthink. I think the problem is when different disciplines aren't aware that other disciplines are studying something similar, just from a different angle.

My hope is that with this conference and with the journal, we can at least make the disciplines more aware of the research that other disciplines are doing. One of the other things that we're trying to do with the journal is address the fact that there are a lot of really important online harms where, if you were to study them, it's not really clear where you would publish your findings. And as an academic, publications are everything; that's how you get professionally rewarded. So if there's no place to publish really important research on child safety, you're just not going to do research on that topic. Our hope is that over time, we incentivize research on some of these under-studied but really important online harms.

Justin Hendrix:

Let me ask you just a couple of diagnostic questions about this broader community concerned with trust and safety issues, from research to industry to outside civil society groups like yours, Jeff. One of the problems that we know is currently an issue with regard to trust and safety at the major tech platforms is that a lot of the focus is on the United States, and a lot of the focus is on the English language. Do you feel that this general effort to institutionalize, as it were, or to formalize, or to somehow mature the industry is moving quickly enough to take that into account?

Jeff Allen:

This is a really important question, and a question that we could be asking the platforms: "What is the process by which you determine that your product is ready to launch in a particular country?" I think in the early days of social media, that preparedness checklist was probably just, "Does the language locale render properly in the app? If so, then we're good to go." But clearly there's a lot more that needs to be in that checklist. Hopefully we are building in that direction.

I think there are plenty of examples that you can point to anecdotally of companies taking it seriously internationally. There are also plenty of examples that you can point to of companies not properly prioritizing it internationally. So it is very much a mixed bag, and there really isn't an accepted industry standard for what it looks like to launch responsibly in a particular region.

Justin Hendrix:

David, is your membership showing signs of becoming more global?

David Sullivan:

Absolutely. Our aspiration is to be a global partnership and to set industry standards globally. I think there's a recognition that there is a lot more work to be done to ensure that content policies and practices are adopted and executed across multiple languages in an equitable way. That is really a big challenge, and there needs to be a lot more resourcing dedicated to it. What I would say is that I've worked with a lot of academics, civil society organizations, and activists from many countries around the world who have been focused on these issues and pressing technology companies about them for years, or in some cases decades. And I think it's important that we not lose sight of them, that we make sure we elevate, listen to, and actually learn from them, and not just listen to them but actually respond to them with meaningful changes. And I think that there is more attention to that.

I would say that there's a tendency, not just in technology companies but in companies in general, that if someone is trying to raise an issue in India or Nigeria, and they're successful in even getting someone from a company to talk to them, it's probably going to be a public policy person responsible for that part of the world. I think it has been a challenge for those people to talk to the people like Jeff who are working inside a company. And I think that with the development of this field in academia, in industry, and in the civil society world, hopefully that's starting to change, and we can foster conversations between people who are working on trust and safety on engineering and technical issues, as well as policy issues, and not just have activists talking to people who are coming from a government relations or public policy perspective.

Justin Hendrix:

Shelby, in the research community that you're building, perhaps around the upcoming conference or so far with regard to submissions to the journal, do you feel that there's a global community coming together?

Shelby Grossman:

I think we're moving in that direction, but it's definitely an issue. With the journal, the overwhelming majority of submissions that we get are about the U.S. and Western countries. For the conference, we got about 275 applications to present, and it wasn't as bad, but still the majority of the proposals related to Western contexts. But I think we're definitely seeing some change there. In some trust and safety areas, I actually think there's an incredible focus on non-Western contexts. For example, when Meta and Twitter suspend these foreign influence operations, the operations are almost always targeting non-Western countries, so I think that's a positive sign.

Justin Hendrix:

With a notable exception, I suppose, of a campaign that SIO and Graphika revealed just a couple of weeks ago, which appeared to be the first one known of Western origin. Let me ask you this. There's probably somebody listening to this saying social media, big tech firms, trust and safety: oxymoron, these things don't go together. That it's all just a little bit like folks in suits with Geiger counters walking around measuring things at a nuclear blast site: these social media platforms are out of control, there's lots of harm, very little of it is addressed, and we've still got such a long way to go. I don't know. How would you address that if you were encountering that particular pessimist on the street?

David Sullivan:

Leading a partnership of technology companies, a deliberately industry organization, I get understandable skepticism. I think the fundamental thing to remember is that these companies are businesses, and the unintended consequences when people are able to abuse and misuse their services to cause all different types of harms are serious problems. But ultimately, if those problems are not addressed, people will leave; they will find other platforms.

So there is a moral imperative for companies to do the right thing here, but there is also a business case for it. And the fact that they are investing substantial time, energy, and resources, whether it's in our partnership or other efforts, I think does speak to the seriousness with which companies are taking it. That said, the level of effort does not always result in the outcomes that we want to see, and we see this in countless places around the world. My personal view is that the scale at which companies operate is not an excuse for not getting these things right. But it does speak to the size of the challenges here, which are going to take time to be fully addressed.

Justin Hendrix:

Jeff, you worked inside Facebook, and worked on an integrity team. Do you have a sense, at that platform or others (not to force you to comment on anyone in particular), of how far off we are from the degree to which these things should be resourced versus where they are at the moment?

Jeff Allen:

I'll counter your skeptic take with another skeptic take. I think there are two common attacks that trust and safety workers and integrity workers can get, both based on misconceptions, of course. Internally, when you're working at the company, integrity teams and trust and safety teams have to worry about being viewed as a cost center, about being the ones that are nagging the product teams to slow down. It's not uncommon for growth work and engagement work to have tension with trust and safety work, and if you're not structuring the teams carefully, those teams will butt heads on the regular, with the growth and engagement teams viewing them as a cost center that's just dragging them down. And then, you're exactly right, on the outside, integrity teams and trust and safety teams are sometimes inappropriately viewed as sort of PR teams.

They're the band-aid on the open wound, there to make the company look like it's trying to solve the problem without actually doing it. And to lean on your analogy, there definitely is room for concern on the outside: are we just people with Geiger counters? Yeah, there are a lot of reporting mechanisms that are just, "Look at all these Geiger counters we have. We have so many Geiger counters," without actually getting to the problem. We're like, "Okay, but where is the radiation coming from, and why is it being produced?" And that is, of course, really where we need to get to when it comes to transparency.

But I think on the business incentive side, it is just wrong that integrity teams and trust and safety teams are cost centers. There is room for tension here, and the tension is really short-term growth versus long-term growth for the business. In the short term, you will see more tension between what the trust and safety teams are doing and what the growth teams are doing. But thinking about the health and growth of the business over the long term skews things more toward the trust and safety side.

So I think what successful companies will be doing here is figuring out how to find alignment between the trust and safety teams and the growth teams, so that it's really growth through integrity, growth with integrity. That is what's going to separate the companies that are still here 10 years from now from the companies that fold three years from now.

Justin Hendrix:

And there was some discussion in that hearing yesterday about whether trust and safety goals are essentially incentivized in these companies. I think Chris Cox at Meta in particular was asked about this, given that they had shut down a responsible innovation team, and Chris Cox said that, of course, everybody at Facebook, or Meta generally, is incentivized to pursue trust and safety. So he was asked, "Are they remunerated? Are they measured against those goals?" And I think the answer, unfortunately, right now is not yet, or no. But Shelby, I want to give you an opportunity on this broader question.

Shelby Grossman:

Yeah. I think the Geiger counter critique is important to address. As an academic group that does not take funding from platforms but collaborates with platforms in various ways, we get a couple of interesting critiques that I've thought a lot about. One thing that we do, as I mentioned, is that when platforms suspend these foreign influence operations, Meta and Twitter will sometimes give us a heads-up, and we'll write an independent report about the network and release it at the same time that the platform announces the takedown. So one critique that we get is, "Aren't you just acting as a PR tool for the platforms? You're helping them publicize the networks that they are willing to publicize." And when I hear this, my thought is, "What's the alternative? Would you prefer a world where they're just not sharing these networks with independent research groups?"

And similarly, I haven't heard this critique yet, but I'm prepared for it. Because the Journal of Online Trust and Safety has published research from platforms, and we're actually going to be publishing more research from big platforms next week, I'm prepared for the critique of, "Isn't your journal just acting, again, as a PR platform for these platforms?" And again, I just think, "Would you rather that these platforms be doing this often really rigorous internal trust and safety research and then not sharing it with the world?" So that's my response to that.

Justin Hendrix:

Let me ask you then a question, and all of you are welcome to weigh in on this. David, given the nature of your organization and how it's structured, this question may not really make sense for you. But Shelby, as you think about this reliance on tech firms, it's not just money, of course; as you say, it's data, in some cases access, and perhaps other forms of relationship that could exert influence over the way that you do what you do. How do you guard against undue influence from industry?

Shelby Grossman:

We do a lot of research, and we put out a lot of research that criticizes the same platforms that are giving us access to these data sets. So that's one thing I'll say: we put out a lot of research on self-harm policies at various platforms, and a lot of that work has been critical of some of the groups that we partner with in various ways. And in general, the only feedback that platforms ever give us on reports is a request to anonymize an account; they never, in any case, provide substantive edits or anything like that. Those are some ways. And all of our partnerships with platforms are formal contracts signed by Stanford University, so they put in place all of the standard academic freedom provisions that are required for those relationships.

Justin Hendrix:

How about you, Jeff? How do you guard against that, as a kind of third-party research and community organization?

Jeff Allen:

Yeah, we are pretty proud of our independence from the platforms. We actually have an oath that we ask members to take, which involves being independent, but also being constructive. So that's why a lot of our research is, "Okay, here is this problem. But also, here are some examples of pathways you can take to start tackling this problem." Pairing the criticism with solutions is an important step toward maintaining that independence, but also maintaining credibility within the platforms.

Justin Hendrix:

So I want to ask a question about the U.S. context for some of this. Again, at the hearing yesterday, there was certainly concern over the trust and safety efforts of the platforms from both sides of the aisle. But I think it's fair to say that there are different concerns on the right, and that the two sides of the political spectrum in this country have different issues with the tech platforms.

And I know that, given those asymmetries, there could be a contingent that regards trust and safety activity as a political project in some way, that it's tinged with politics even if it doesn't intend to be. That seems to be a core tension in some of these hearings, including in yesterday's hearing in the Senate, even in the morning, when you had the experts who were former employees of the platforms speaking. They were doing their best to say, "I never saw decisions being taken, for instance about content moderation policies or specific content moderation decisions, that were politically motivated." And yet what you're hearing from the other side is, "We see an asymmetry in the application of your policies." So how do you think that trust and safety folks can manage that going forward? Does that become a problem in the future, if that phenomenon continues to play out?

Jeff Allen:

I think that there definitely is a risk, and we're probably already there, where a lot of this work is being politicized for political reasons. I think the proper way to navigate this is to increase transparency. Both sides are saying, "It's our side that's being penalized," and it's because they're hearing from their side when it gets penalized, and they don't hear from the other side. When a content creator or a publication on one side of the aisle gets taken down, they don't go to representatives on the other side of the aisle to complain about it; they go to the representatives on their side of the aisle. So each side says, "Oh, they're attacking us. And it's all biased." There was one moment when they even talked about bias in the fact-checkers, which, if you actually study the work that fact-checkers do, they're fact-checking both sides, they're dinging both sides.

I think it's mostly a bias in what they hear and the stories they hear. And it is because we don't quite have enough transparency. When you're looking at these platforms from the outside, really all you can do is gather anecdotes. And the best practice is to gather enough anecdotes that it starts to look systematic, but you're still gathering anecdotes. And it takes a lot of work to make sure that your process for gathering the anecdotes isn't biased in itself. So hopefully increased transparency will help here.

Another thing to think about is, if you're worried about the platforms being biased, remember that the platforms are global companies, and they're operating in countries where they don't even understand the political context, let alone understand it enough to be biased one way or the other. So it's important for the platforms to be building processes that work when they are completely ignorant of the political situation that's going on and still able to operate in a fair way. Obviously, operating in a fair way means learning enough that they're not completely ignorant of the local situation. But building out processes that are robust enough to operate when you aren't an expert on the ground and don't have the capacity for bias, having more transparency into what those processes look like, and also having more transparency into how they're applied and what their impact is, will definitely be one path forward here.

Shelby Grossman:

I agree with what Jeff was saying. My team was part of this thing called the Election Integrity Partnership in 2020, which monitored social media platforms for misleading narratives related to the 2020 elections. And through that work, we have some summary statistics about what percent of the misleading narratives that we saw on Facebook or Twitter were right-leaning versus left-leaning. But the thing that I find really frustrating is that, to the extent we have those numbers, it's for platforms that had really accessible APIs that made it possible for us to capture those statistics. We weren't able to do that for TikTok, for example, because it was just so much easier to search for these narratives using CrowdTangle or the Twitter API.

It was just really frustrating, understandably from a rational choice perspective, to watch Meta disband the CrowdTangle team because they were getting all this negative publicity because they were making it so easy for people to find misleading content on their platforms. So I think things like the Platform Accountability and Transparency Act make a lot of sense, because they would require all platforms to share certain types of information with qualified researchers, which would level the playing field and solve this problem.

And the only other thing I'll say is that I think there's a lot of focus on political bias in the U.S., but it'd be great, per your earlier question about the international context, Justin, for there to be more research on the extent to which there's bias in moderation in other countries, though that's really hard to do. We don't know a lot about who the content moderators are for content in languages that aren't very widely spoken. So I think that type of information is important.

Justin Hendrix:

I just want to push you, David, on this same question, but maybe in a slightly more specific way. There were a couple of questions in the Senate hearing yesterday that got around to the question of whether platform executives "collude" on decisions they might take around content moderation or policy. And you're running an organization where policy executives from the platforms talk to each other, share information generally, and, I'm sure, open up on some level about the challenges that they're facing. Do you worry about potentially being drawn into some conspiracy theory with regard to how these things work?

David Sullivan:

Well, like any industry organization, we have robust antitrust compliance measures to make sure that we are managing the conversations that we have in the right way. But I think that any initiative in this space, including all of our work in this conversation, and anyone who's touching upon these issues at all, faces that risk. You have to think about how your work can potentially be weaponized by some bad actor, not unlike the way trust and safety teams inside companies have to think about how their products can be abused or misused.

I would say a couple of things. This summer we released a report on the first evaluations of 10 of our members. That report doesn't say that Meta is doing this and Microsoft is doing that; it aggregates information, and it provides a starting point for discussions about how mature different practices are. And going back to points that both Jeff and Shelby made, the least mature practices, as the companies themselves assessed, were particularly around support for academics and researchers, and more broadly around work with external organizations, whether that's fact-checkers, human rights and civil society organizations, or getting user input into how policies get developed and how they are enforced.

And I think that so much of this work has been closely held inside companies, particularly because they are worried about how bad actors can game their systems, their policies, and their procedures. Because of that closely held nature, it has led to either conspiracy theories or this general lack of transparency that gives us less information to go on. So these are hard things to think through, but there's a lot of room to increase transparency here.

One other point to make on these political divides is that it's easy to say, and I think I've said it in things I've written, that in the United States, Republicans want less content taken down and Democrats want more content taken down. So while they all agree that something should be done, they can't agree about what to do about it. But I actually think that that is a pretty simplistic view. When you look at the progressive side, you have organizations that have long been dedicated to press freedom and human rights, who've been saying, "We need to have more transparency and accountability about how companies make these decisions, and not have companies taking down content from marginalized or vulnerable communities, content that allows communities to organize themselves, content documenting human rights abuses in different countries around the world."

And you have other organizations saying, "We need to do something about hate speech and about extremists, right-wing extremists here in the United States or elsewhere." And on the conservative side of things, while there are a lot of perceptions of bias that I think are not necessarily grounded in empirical studies, you also have organizations, including a lot of libertarian organizations, that have stuck to their guns and said, "Actually, this is going against some of the core principles of our political institutions on our side of the world." So I think that it's more complicated when you duck under the surface, and there are folks who are making arguments that may run against the grain of the conventional wisdom on their side of things. It's worth exploring some of those differences, and companies should actually be talking to all of those folks to have a better sense of how they can manage these challenges.

Shelby Grossman:

I'll just say that I agree it's important to have a diverse array of civil society groups in this space that have different perspectives about what should be done. I don't think it'd be healthy if all academic groups and all civil society groups in this space had the same ideas about what should change about content moderation.

Justin Hendrix:

I'll ask you all just one last question. One of the things we've been covering a lot on the podcast is platform policies around elections with regard to the upcoming midterms; there have been a lot of announcements over the last few weeks about that. Of course, there are other elections happening in the world. Brazil is a particular point of concern, with a lot of folks looking at disinformation, and some of the threats from the incumbent there sound very similar to those from the former president here in the 2020 cycle. How are you all addressing election integrity issues in your respective efforts? And will it be a significant part of the discussion at the conference? Shelby.

Shelby Grossman:

The Stanford Internet Observatory is again part of the Election Integrity Partnership for the midterms. So we're working with the University of Washington to continue to monitor misleading narratives. And we're going to start putting out some work related to that.

Justin Hendrix:

David, are you all addressing election integrity issues, civic integrity issues in your organization?

David Sullivan:

Our approach is to be agnostic to specific types of content risks, to allow companies to anticipate whatever the risks are to their users, to people, arising from their particular product or service. That said, something we have been thinking about, and something that's actually coming up in terms of requirements in the Digital Services Act in the EU, is for companies, and for the digital services industry, to develop crisis protocols that they can use. And there's been a lot of work, a lot already done in this space when it comes to terrorism and violent extremism; the Global Internet Forum to Counter Terrorism has a crisis protocol that it employs.

But I think that there is room for the industry to think about other crisis protocols, whether that's something generic that could be adapted to specific circumstances. When you look at elections, that's one of the greatest areas of focus and concern for trust and safety teams inside companies, and an area where collaborative thinking really could benefit the industry as a whole. I think armed conflict is another area where these protocols could be a valuable tool in the toolkit.

Jeff Allen:

At the Integrity Institute, this is definitely something that we care about a ton. An important point in the Integrity Institute's history, before it was ever founded, actually dates back to the civic integrity team at Facebook. And we have tons of members who have worked inside the platforms trying to protect elections and trying to prevent any manipulation or harmful activity around elections. So we have a ton of members who have experience here, a ton of members who still care about it very passionately, and members who are working on it right now for the current cycles.

I definitely will say stay tuned for a lot of election work and election-related content from us going forward. The midterms are obviously going to lead into 2024 very, very soon. And the international elections don't stop with Brazil; we had the Philippines earlier, and we're going to have a wave of them. We are lucky to have Katie Harbath as one of our fellows, who's been leading a lot of efforts in the election space, including with the Bipartisan Policy Center. She actually has a lot of fun work tracking the announcements of the platforms and what actions and policies they are putting into place for the midterms. So we've been doing a lot of work around the elections, and it's definitely going to continue.

Justin Hendrix:

I'm going to ask you all a last question, and maybe it's an opportunity, if there's something you wanted to say but didn't get a chance to mention. But I just want to ask you to cast your minds forward. The premise of this conversation was that trust and safety, while it's been around for a while and has certainly taken different forms on the internet, in digital commerce, and in digital media for some time, feels like a practice or set of practices that is maybe beginning to mature, or professionalize, in a new way. If you cast your minds forward 5, 10 years, what do you think are the problems you'll be working on then?

Shelby Grossman:

Maybe I have two things to say on this. First, my team is increasingly starting research projects that are not related to information integrity. I think it's great that there's been an increasing number of academics and civil society groups that are interested in misinformation and disinformation. We're definitely still doing a lot of work in that space, particularly around the U.S. elections, but we're also trying to really start up some robust research projects on, for example, child safety and self-harm, that kind of stuff.

And the other thing I'll say is that the director of the Internet Observatory, Alex Stamos, teaches a class in the computer science department at Stanford called Trust and Safety Engineering. At the same time that he teaches that, I teach a sister course in the political science department that's basically on the politics of trust and safety issues. The goal of this course is primarily to encourage undergraduates who are thinking about maybe starting their own startup, or going to work at a tech company, to just be thoughtful about the ways in which the products they're working on could be causing human harm, and to try to get ahead of those issues. But we also get a bunch of PhD students in these classes, and our hope is that this inspires more PhD students to want to research online safety issues.

David Sullivan:

So, I think there are two aspects of this. The first is that I hope that, looking ahead, in some ways we can make trust and safety boring. You're never going to solve these problems, because they essentially stem from human behavior, but it could become much like, say, financial accounting and financial risk management. There are all sorts of issues happening all the time that people can be concerned about, but the nature of that work is accepted and understood, and it's used to manage risk. It's not on the front pages every single day in terms of whether and how to handle it. There will be accepted standards and practices helping companies address this, and it will not be the point of controversy every day in every way.

I think the second piece of this is that hopefully, years from now, we'll have a much, much more internationally representative set of companies providing services to countries all around the world, who will be thinking about these things, who will be part of the discussion and contributing to the research. This will be much less about a number of U.S.-headquartered companies and the impact that they're having around the world, and much more globally and geographically balanced as a discussion.

Jeff Allen:

I don't know, thinking about the problems five years from now isn't quite as fun as thinking about the aspirational hopes five years from now. One thing we've definitely seen running the Institute is that there are just so many people who really genuinely care about getting this right, and who are really here for the long haul. That's actually one real big takeaway: all it takes is one year on a trust and safety team or an integrity team to convert that worker into an integrity professional who says, "Ah, actually the real problem I want to work on isn't just tech, it's how do we get tech right for society?" And we've seen so many people who have that appetite even when they leave the field and go on to some other role in tech. They're like, "I'm joining the Institute because I definitely want to keep one foot in this space, so that I can come back in later on."

So I definitely think there's a lot of room to be hopeful that as the discipline matures... well, we're already seeing the people mature. We're already seeing the people come together and say, "Ah, this actually does need to be a thing." So I think there's definitely room to be optimistic that it will become that thing. And really the question isn't whether or not this will gel into a formalized discipline, but what that process is going to look like, who the voices are going to be that lead it, and where the ideas are going to come from about what it should look like.

Shelby Grossman:

Building on something David said, one of my hopes for the future is that... so at the moment, when platforms make announcements about foreign influence operations, it's typically no longer a scandal. The framing isn't, "Oh my God, there's a foreign influence operation on Twitter." The framing is, "Oh, this is interesting. Twitter discovered this foreign influence operation in Thailand. What can we learn about Thai politics from this?" And I think that's really been a neat development. My hope is that we move in that direction for other types of online safety issues, where platforms are just as transparent, and they're not disincentivized from sharing information about what they're finding.

Justin Hendrix:

Well, I hope I'll have the opportunity to speak to each of you about these issues as this field progresses, and as we learn more, and as hopefully the dialogue improves over the next few years. So Jeff, David, Shelby, thank you so much for joining me.

Shelby Grossman:

Thanks, Justin.

Jeff Allen:

Yeah. Thank you.

David Sullivan:

Thanks very much.
