What Should We Know About Government Influence on Content Moderation?

Tim Bernard / Feb 8, 2023

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha, and worked in various capacities in the education sector.

J. Edgar Hoover FBI Building entrance, Washington D.C. Interactions between FBI and other government agency officials and social media platforms are under scrutiny. Shutterstock

Governments across the world have varied and complex relationships with social media companies. In countries where governments seek to crack down on free expression or police certain forms of speech more aggressively, such as India, U.S. firms such as Facebook, Twitter and YouTube must frequently respond to demands to remove material, or risk consequences ranging from fines to outright bans.

In the U.S., there has long been concern over what personal information law enforcement and other government agencies can demand from social media platforms—from the Snowden revelations about intelligence agency access a decade ago to worries about social media data being used to identify visitors to abortion clinics when the procedure was criminalized in dozens of states last year.

In the last few years, a new vector in the government-platform relationship has come to the fore: concern over government bodies influencing content moderation decisions, particularly when that influence occurs outside of established, publicly known frameworks mandating the removal of illegal content, such as child sexual abuse material or other material clearly proscribed in a given jurisdiction.

For a significant segment of Americans, any interaction between government officials and social media executives with responsibility for content moderation decisions is of serious concern. This is especially acute for those on the right, given the widespread belief in conservative quarters that social media firms are biased against them:

  • Civil libertarians have decried what they see as potential “jawboning”—circumvention of First Amendment rights by state actors pressuring private parties to effect censorship.
  • Various legislative proposals led by Republicans at the state and federal level have sought to severely limit what content platforms can remove, often on the basis that content moderation decisions are politically motivated.
  • A lawsuit brought last year by the Republican attorneys general of Missouri and Louisiana claims that White House officials “threatened and cajoled social-media platforms for years to censor viewpoints and speakers disfavored by the Left,” a practice it says is conducted “under the Orwellian guise of halting so-called ‘disinformation,’ ‘misinformation,’ and ‘malinformation.’”
  • Matt Taibbi, a journalist selected by Elon Musk to review internal documents from Twitter, told Fox News host Tucker Carlson: “I think we can say pretty conclusively after looking at tens of thousands of emails over the course of these weeks that the government was in the censorship business in a huge way.” (Concerns over relationships between former trust and safety executives at Twitter and officials at the FBI and other federal agencies are shared by some on the left.)

The R Street Institute, a center-right think tank that advocates for free market principles, hosted an event on January 24 to discuss these issues and, in particular, to debate how transparency policies might remedy them. Sen. Cynthia Lummis (R-WY), co-sponsor of a 2021 bill to require platforms to disclose any government requests or recommendations for content moderation, delivered opening remarks, laying out the concerns of many conservatives regarding government engagement with social media platforms and possible interference in online speech.

The discussion was moderated by Shoshana Weissmann, Digital Director and Fellow at the R Street Institute, and the panelists included:

  • Neil Chilson
  • Emma Llansó of the Center for Democracy & Technology (CDT)
  • Kaitlin Sullivan of Meta

The inspiration for the panel, Weissmann explained, was a tweet by Chilson last October recommending that Twitter, under its new owner, and other platforms disclose all “requests for content moderation or user discipline from governments or government officials,” and Sullivan’s responses linking to the disclosures that Meta and Twitter already make in this regard. The discussion broached a variety of issues, ranging from the panelists’ assessment of the status quo to the need for greater transparency and the complications that may reasonably stand in its way. Four key topic areas are summarized with lightly edited excerpts below.

1. Why governments get involved in content moderation

The speakers agreed that there are cases where governments may have good reason to discuss content moderation with social media platforms, and explored the temptation for officials around the world to reach out to centralized gatekeepers in order to inappropriately control public discourse.

Sen. Lummis:

The government needs to be very careful about how it wades into regulating social media platforms, so that it doesn’t just stifle free speech—but that’s not an excuse for taking no action at all [against serious harm perpetrated via social media].

Emma Llansó:

We’ve always had a focus on this role of intermediaries ... as potential wonderful supporters of free expression. It’s a low barrier to entry to be able to post content on social media, or, before that, message boards, or run your own blog, or send out an email newsletter. ... But they can also be targets, as potential failure points or gatekeepers for speech, that governments both in the US and around the world will target when they see someone speaking and can’t get at that person directly but want to silence them.



...



One thing I think we should be clear about: it’s not always bad for government and companies to be talking to each other. There are times[ when this is valuable], whether it’s natural disasters, or trying to get the best information about COVID-19 available to people, or accurate information about how elections are run. Often, government is a really important source of information that online services are trying to help get out to people, and use to inform their content moderation.



And there might be things that happen that are not for public consumption but are more about cybersecurity threats, or other kinds of real threats to the structures and the systems that the companies are running, where I think we do actually want government and companies to be able to talk to each other and share that information.

Neil Chilson:

I would be remiss, as a huge fan of emergent order and bottom-up solutions, not to point out that this entire problem exists because we have a quite centralized approach to content moderation, which has its benefits. But this is definitely one of its downsides: when you have a central decision maker who is choosing how content moves in a system, you’re giving up a lot of the local knowledge that individuals have about the context in which they’re saying things. And it’s a pretty nice target for people who want to pressure a media ecosystem.

2. The transparency challenge for platforms

Major platforms release public reports (and notify individual users) of at least some government requests to remove content. However, governments make different kinds of requests, including confidential legal demands, flagging specific content that may violate platform policies, and more informal pressure, which greatly complicates the task of compiling a useful transparency report. In addition to data, the panel also recommended that platforms disclose their standing relationships with governments, and the associated policies that they use internally to respond to government requests.

Emma Llansó:

We’ve seen government pressure on intermediaries play out in a lot of different scenarios around the world, from extremely overt kinds of pressure (like a government blocking access to an entire website because, for example, YouTube won’t take down a certain video and Pakistan blocks them entirely) to governments threatening or even arresting and detaining staff of companies, including in India and Russia, as a way to pressure companies into taking stronger action against user content. And we’ve also seen, spreading across Europe, more formalized programs where law enforcement will contact companies about the kinds of content on their services that might violate the company’s terms of service, even if it’s not necessarily illegal under that country’s laws, and certainly without going through any kind of court process to get that content taken down.

Kaitlin Sullivan:

When it comes to disclosure for users, what we [Meta] already do is notify them in almost every case when their content is removed for violating our community standards, so for violating the rules that we set for our platform that exist everywhere globally regardless of local law. And then the second thing that we do, and this is something that we’ve really invested in a lot, and I’ve seen change a lot in my 10 years, is notify users when their content was removed or restricted in a particular jurisdiction on the basis of a formal government report, which is generally that that content violates the local law. And we are always trying to get better and more granular about what that notice is. I think we’d love that notice to be: here is exactly who requested this; under what law they did; and your process for exercising your due process, to the extent it exists in your country, against that.







We also disclose basically both of those things at the macro, aggregated level. So you can go look at our transparency report for government requests for content restriction based on local law, or based on things that are not [against] our standards, where we would not have removed it under our own standards—we will always do that review first ... And then, separate to that, we have our transparency report, which is what we do under our policy.



...



What Neil’s tweet was kind of asking for and what actually the Oversight Board [the quasi-independent body convened by Meta to advise it on content moderation decisions and policy]... is asking for is: what about the overlap between those two things, when the government sends you a request and it is because it violates your policies, or you take action on it because it violates your policies? That is something that we are working on putting together.



...



Some of the reasons why [reporting government-influenced content moderation] is a little bit complicated can be in terms of knowing when a government request comes in: we have formal channels, [but] anyone on Facebook can also report using our tools. There have been discussions on this panel of other [less formal] channels, so being able to aggregate that, being able to really organize the data in a meaningful way to share it [are] some of the things that make that a little bit trickier to know.



...



There are countries that give us legal orders and then have gag orders that come with those for the companies where they cannot disclose to the user or to the public what the request was and who it was from or why. Some of those are for really important national security reasons. Some of those are for reasons that I’m sure folks on this panel would like to have more scrutiny and oversight over.



And then there are also interesting questions about when someone is acting in their official capacity in this: if Shoshana is a Hill staffer and she had an issue logging into her account because someone hacked it and reaches out to a contact at Meta, that’s probably not in her official capacity; if she’s reaching out on behalf of a constituent because ... they called up their congressperson ... is that in their official capacity? And how do we make those differentiations?

Neil Chilson:

I think it’s great that Meta’s working on that specific overlap ... for a couple of reasons:



One: policies, when they are broadly applicable, ... it will matter a lot depending on who’s reporting the content. ... It’s like if you had a speed trap where ... everybody is going above the speed limit, but you only reported [those affiliated with] a certain political party, right? Yeah, sure everybody’s violating the law, but you’re just picking out the ones that you want and that can have a biased effect. And so, I’m glad that you guys are working on that.



And the potential vagueness of some policies pushes us towards a need for more disclosure about when the government is achieving or seeking to achieve certain goals by helping the companies interpret those policies and who they apply to. So I think it’s great that the companies are moving in that direction and I can’t wait to see more of that.

Emma Llansó:

Often the first thing that a company does when they get a court order, or other kind of legal demand, is look at their Terms of Service, because the Terms of Service apply globally on the platform; they’re the set of rules that the company is more familiar with enforcing, rather than the specific law in one of hundreds of countries around the world. And so, a lot of times, complying with a legal demand doesn’t mean the company is saying, “We consent to the jurisdiction of this legal actor in this country and agree on the interpretation of the law, and that’s why we’re taking this content down.” It’s much easier for them to say, “Oh, this violates our policy against such and such.”



...



And that’s a dynamic that I think is really important for users to understand, because governments are getting savvier about playing on that, about figuring that out. Because ... there are often content policies that are more restrictive than what could be restricted under law, ... that can be a way for governments to use those maybe broader or ... more specific rules ... to target content that the government wouldn’t be able to pursue under law in a constitutional or due process proceeding.



...



When you can look at a chart of how a company has responded to court orders and see, “Oh, okay, 20% of the court orders from Turkey were complied with as a matter of law, but in practice, 90% of the time when a court order came in, content came down because the government was just getting good at targeting content that violates terms of service,” that looks like a very different relationship between that company and that government.



There is a case back in 2021 from the Israeli Supreme Court that laid [the accountability aspect] out really clearly. Israel has what they call their cyber unit, which ... send[s] referrals of content to social media services. Activists in Israel were trying to bring a legal claim against the cyber unit ... to say, “This government body is impermissibly targeting our speech because of the content of the speech." ... Because the users in that case did not have any kind of notice or information from the companies about the fact that the report that ultimately led to the company reviewing their content and taking it down came from the cyber unit, the court said they didn’t have standing. They said, “You can’t demonstrate that the government was even involved in this company applying their Terms of Service against your content.”



...



A company is going to want whatever ... leads that they can get on content that might violate their policies to come in through whatever channel they can get. ... I care more about the user and what’s happening to the actual person and their relationship with their government. ... There really are these three different actors in these scenarios, and while it might be understandable for the company to want to use these sorts of reports as part of how they evaluate content, that’s not the entirety of the interaction that’s happening. ... Empowering users to understand that what they have said online has caught the attention of a government official somewhere is a really crucial aspect of all of this.



...



One of the things that we call for in the Santa Clara Principles ... as a starting point[, is] to hear from companies: what are their general theories and approaches to engaging with government actors? And if there are formal procedures, or processes, or working groups, or whatever it is where they are regularly in contact with government, disclose those, explain those, give users a sense in general of how this company interacts with government.



And I think for a company like Meta that’s going to be a lot of complicated answers with a lot of different dimensions to it. ... What does it look like to at least identify all of the multi-stakeholder working groups you’re in, or the different ways that on subject matter you might approach interacting with government?



...



Even if we, as the people, can’t know all of the details of confidential information that’s being exchanged, we should know that it’s happening and know that there are some clearly articulated limits, such as that individual information about user accounts is not being shared, but general information about threats that a company is seeing on their service might be shared. Even as a starting point, getting a general description of that sort of information would move us so far into the conversation and get some of this out into the light.

Neil Chilson:

As far as the level of transparency that would be useful: ... the level necessary to incentivize government employees to properly weigh and balance whether or not the ask is worth the cost. ...[R]ight now, it’s basically costless as far as I can tell, and that cost should go up a little bit for government agencies.

Kaitlin Sullivan:

Meta has one paragraph on [our interactions with governments]. I think Emma is right to say the process is probably more than one paragraph long, and I can share briefly what we’ve committed publicly to our Oversight Board to share around government requests for removal under our own policies. We’re going to start with the number of unique requests we receive; pieces of content covered by the requests; such pieces of content removed under our policies; such pieces of content that might be locally restricted under local law (sometimes the requests come in for both); and such pieces where no content is taken down. I think that’s a starting point.

3. Transparency from the government

Sen. Lummis mentioned in her opening remarks that even though her bill placed the responsibility for transparency on the platforms, an acceptable alternative approach would be to require the government to disclose any discussions with platforms regarding content moderation. The panel discussed a few aspects of approaching the problem from the government side, at least within the US.

Emma Llansó:

There’s long been a call from civil society groups in the US and around the world for more transparency ... from tech companies ... but also transparency from the government actors themselves. And this is a place where I think especially the United States could really stand to show some leadership worldwide about being more transparent ... about when they’re actually having contact with social media companies and making these kinds of requests.

Neil Chilson:

There’s plenty of things that government can do to require employees to respect the civil liberties of the citizens by not calling for their content to be taken down. ... I think there’s a lot of legislative proposals, but there’s a whole host of things, everything from congressional rules to just updating ethics for federal employees for example. There are a lot of options for government, for congressional members who are concerned about this to push government to be much, much more transparent about these types of efforts to shape dialogue on the social media platforms.



...



As somebody who used to answer FOIA requests, [I can say that] the potential for transparency has a deterrent effect on government actors. And so, if they know that when they make a request [about content moderation to a platform], it’s going to be logged by the company or it’s going to be logged by their agency under some requirement, they’re very careful when they make those requests. ... I think that getting the incentives right on the government side is super important. And I think in the US we have a shot at that.

Emma Llansó:

It might be a bit much to expect that Russia, or Turkey, or China will provide that transparency, but we could expect better of the US government. We could expect better of a lot of governments around the world. Yes, we’re never going to necessarily know for sure that a flag that comes in through a standard user reporting portal comes from a government official who’s sitting at home on their couch in their sweatpants on a Friday night off the clock. But there sure as heck could be rules at the place where that government official works about whether that’s appropriate behavior for that official to engage in.



...



Getting agencies to articulate [what] their policies ... and ... approaches are, what their standards and limits are, would be an enormous step forward.

4. Government-moderated areas on privately-owned platforms

In some situations, such as on Facebook pages or official groups, a government entity has some level of direct control over content moderation, such as the ability to delete or hide certain comments or restrict access to the page or group. In some respects, this makes the issues more clear-cut, but it also introduces distinct complications.

Kaitlin Sullivan:

There are also places where governments can directly moderate content on platforms. So if a congressperson has a page on Facebook or whatnot ... they can moderate their own comments directly. And I think some of the offices share the policies under which they do that and some of the offices don’t. And I think there are a variety of stances that ... the different US congresspeople’s offices have on how to deal with comments on their own pages or in response to their own tweets.

Emma Llansó:

[There is] some developing First Amendment case law on that as well: when an official has a space on a social media service, is that a public forum where they are limited in how they can suppress user speech on their little slice of Facebook or Twitter or any of these services?

Neil Chilson:

The one nuance there, I think, is that at least when that moderation happens, you might not know why, but you know who. I think the big difference is if that senator or that government official goes and talks to a platform and says, “Hey, can you take down this content?” Then, indirectly, you’re not quite sure why it went down; you certainly don’t associate it necessarily with some official action. ... To me, the biggest problem with all of this is that when stuff like this eventually comes out, or when you see it and you experience it, it undermines trust both in the platform and in the government. And we’re in a situation in this country where we definitely could use more trust in some of these institutions, not less.

- - -

Though some of Sen. Lummis’s motivating claims about political bias on the part of the platforms may be disputable, her overarching concern about opportunities for the government to quietly interfere in the content moderation process is shared by many across the political spectrum. And the approach she endorsed, increasing transparency, seems reasonable, even if the bill itself does not execute it with much nuance. It was notable that a panel representing substantially different perspectives reached broad consensus on this proposition, especially on a topic that is the locus of such partisan acrimony.

It would be instructive to hear a government representative in, perhaps, a law enforcement, diplomacy, or intelligence role explain where they believe transparency would be useful and where its limits lie, in order to see how their views might differ from those of the presenters at this event. Relatedly, several panelists acknowledged that there must be some exceptions in security contexts, though where to set the boundary between what is and is not a security issue is often debatable.

Another potential obstacle that was not discussed is how privacy concerns may limit the level of detail in transparency reporting (a point acknowledged in a recent CDT report on transparency reporting co-authored by Llansó): just because a user did not want their content removed does not mean they want it publicized in a report. Yet absent the content itself, we would have to rely on the government and/or the platform to characterize it.

Though the Supreme Court appears to have delayed ruling on active cases concerning requirements for social media platforms to be transparent about content moderation, there are some concerns about the constitutionality of mandating such reports in US law. As we saw, there is potential support for moving transparency obligations to the government side instead, though other countries may be more resistant to establishing similar policies. (Outside the US, these issues are only becoming more pressing, both in authoritarian regimes like India, which recently demanded the removal from Twitter and YouTube of a BBC documentary critical of PM Narendra Modi, and with respect to the regulatory regimes taking shape in the EU under the Digital Services Act and proposed in the UK’s Online Safety Bill.)

But despite these and doubtless other hurdles, this does appear to be an issue where, if politicians can resist the temptation to transform it into a partisan or anti-Big Tech cudgel, we may see important developments in the coming years. Not only can transparency about these relationships reveal and reduce abuses by governments worldwide, but it may even dispel some fears of improper government interference in cases where little to none exists.
