First Amendment Defenders and the Supreme Court Should Reject the Jawboning Bogeyman

Dean Jackson / Feb 22, 2024

The author thanks Faiz Thakur and Fabiola Uwera, Georgetown Law School student externs with the Tech Justice Law Project, for their assistance with this piece.

The Supreme Court in Washington, DC. Shutterstock

In the early days of the Biden administration, the Capitol still smelled of insurrectionary smoke. The National Guard manned barricades against another potential attack. Thousands of Americans were dying each day from a pandemic disease that would claim hundreds of thousands more over the following months. Many of these deaths were preventable: a vaccine might have stopped them, but online conspiracy theories led many citizens to decline the life-saving shots.

The United States Court of Appeals for the Fifth Circuit writes that during these dark days, “a group of federal officials” was “in regular contact with nearly every major American social-media company about the spread of ‘misinformation’ on their platforms.” In its telling, those officials “coerced, threatened, and pressured” social media platforms to censor “disfavored viewpoints” in order to slow the spread of online misinformation. And, the Fifth Circuit writes, “the platforms seemingly complied.” The allegedly disfavored users brought a lawsuit against the officials, claiming their First Amendment rights had been violated.

For Americans who treasure free expression, this is a frightening tale. It is also a tall one.

The United States Supreme Court will hear oral arguments in this case, Murthy v. Missouri, on March 18th. The Fifth Circuit claims that “the Supreme Court has rarely been faced with a coordinated campaign of this magnitude orchestrated by federal officials that jeopardized a fundamental aspect of American life.” It concurs with an earlier ruling that the government’s actions “had the intended result of suppressing millions of protected free speech postings by American citizens.” And for the most part, it upholds an earlier injunction forbidding the White House and several federal agencies from interacting with social media platforms except in strictly limited circumstances.

As the Supreme Court weighs whether this portrayal of events is accurate, it should closely examine the Fifth Circuit’s review of the evidence to see how well it corresponds to reality. The more serious the distortion of the evidence, the more First Amendment advocates should stop to ask whether the threat of government jawboning, perhaps hypothetical in this instance, is a distraction from more imminent dangers.

Persuasion or Coercion?

Persuasion is an essential function of a democratically elected government. Elected officials must be able to speak persuasively to the public for democracy to function, and the government’s authority and democratic legitimacy give it a unique role in persuading powerful private entities to act for the public good. Legal precedent draws a line between this persuasive function and state coercion, or the improper threat of state action to suppress protected expression. But this line is blurry, and as legal scholars note, Murthy v. Missouri is an opportunity to sharpen it.

Another question at play is whether private, third-party actors can become agents of state coercion when they receive “significant encouragement” from the government. Considering this possibility, an injunction issued by the district court on July 4, 2023, implicated independent researchers at the Stanford Internet Observatory (SIO) and two projects in which SIO participated: the Election Integrity Partnership and the Virality Project. It barred the government from “collaborating, coordinating, partnering, switchboarding, and/or jointly working with” these researchers “or any like project or group.”

Stanford filed an amicus brief arguing that the researchers are “not government actors” and that “the district court’s findings of censorship… rest on numerous factual errors about basic details.” The Fifth Circuit’s later ruling validated this argument, vacating portions of the injunction affecting these researchers because they “may implicate private, third-party actors that are not parties in this case and that may be entitled to their own First Amendment protections.”

But other portions of the injunction stood. Initially, the district court barred six federal government actors from communicating with social media platforms except under a vaguely defined set of exceptions: the White House (including the Surgeon General), the Centers for Disease Control and Prevention (CDC), the Federal Bureau of Investigation (FBI), the National Institute of Allergy and Infectious Diseases (NIAID), the State Department, and the Cybersecurity and Infrastructure Security Agency (CISA). The Fifth Circuit’s later ruling spared only the last three from the initial injunction’s prohibition, and a revised ruling reinstated the restrictions on CISA.

Weighing the Evidence

But, as with the injunction against Stanford, the Fifth Circuit’s conclusions regarding the federal government are erroneous. They rest on cherry-picked evidence, flawed analysis, and misunderstandings about the internal workings of social media companies.

In weighing the difference between persuasion and coercion, the Fifth Circuit presents snippets of email exchanges between government officials and social media platforms. The arrangement of these snippets tells a story of furious government officials and browbeaten platform staff. Because none of them are cited to source documents, it is difficult for a casual reader to put them in context to see if that story is true. But the quotes can be traced back to longer, publicly released email exchanges that show a bigger picture.

Consider this excerpt from the ruling:

…Once White House officials began to demand more from the platforms, they seemingly stepped-up their efforts to appease the officials. When there was confusion, the platforms would call to “clear up” any “misunderstanding[s]” and provide data detailing their moderation activities. When there was doubt, they met with the officials, tried to “partner” with them, and assured them that they were actively trying to “remove the most harmful COVID-19 misleading information.” At times, their responses bordered on capitulation. One platform employee, when pressed about not “level[ing]” with the White House, told an official that he would “continue to do it to the best of [his] ability, and [he will] expect [the official] to hold [him] accountable.”

Then, later:

In another message, an official sent Facebook a Washington Post article detailing the platform’s alleged failures to limit misinformation with the statement “[y]ou are hiding the ball.” A day later, a second official replied that they felt Facebook was not “trying to solve the problem” and the White House was “[i]nternally… considering our options on what to do about it.”

As presented, the interactions described here seem heated, threatening, even malevolent. They tell a story of angry missives from the government met with near-total compliance by platforms. But the full exchange is discoverable thanks to the testimony of former Missouri Deputy Attorney General Dean John Sauer to the House Select Subcommittee on the Weaponization of the Federal Government. Sauer included with his testimony several exhibits that appear to be drawn directly from discovery in Murthy v. Missouri. A close reading of these shows a more nuanced version of events, one in which White House officials appear more reasonable and platforms less timorous.

“Hiding the Ball”

The “hiding the ball” email was sent by White House Digital Director Rob Flaherty on March 14th, 2021, after he had received several communiques from Facebook on its efforts to respond to vaccine-related rumors during the COVID-19 pandemic. “We wanted to make sure you saw our announcements today about running the largest worldwide campaign to promote authoritative COVID-19 vaccine information and expanding our efforts to remove false claims on Facebook and Instagram about COVID-19, COVID-19 vaccines, and vaccines in general during the pandemic,” wrote a Facebook executive in a February 8 email which included a summary of the company’s public announcement.

Stepping back in the timeline, Flaherty responded to that February email, asking Facebook to clarify the circumstances under which it would remove posts. “Is there a strike policy, ala YouTube? Does the severity of the claims matter?” He followed up the next day in a second message, asking about new policies around the promotion of civic and health-related Facebook groups:

All, especially given the Journal's reporting on your internal work on political violence spurred by Facebook groups, I am also curious about the new rules as part of the "overhaul." I am seeing that you will no longer promote civic and health related groups, but I am wondering if the reforms here extend further? Are there other growth vectors you are controlling for?

Happy to put time on the calendar to discuss further.

Facebook responded with explanations and an offer to schedule a meeting. This is the end of the exchange in the Sauer exhibits until March 14th, 2021, when Flaherty sent an email with the subject line, “You are hiding the ball.” The body text contained only a link to a Washington Post article titled “Massive Facebook study on users’ doubt in vaccines finds a small group appears to play a big role in pushing the skepticism.” Facebook responded by offering a call to clear up a “misunderstanding on what this story is covering” and sharing an announcement that was mentioned “on Friday’s call” (indicating that non-email communication continued between the February and March email exchanges in the Sauer exhibits).

Flaherty wrote back:

I don't think this is a misunderstanding… I've been asking you guys pretty directly, over a series of conversations, for a clear accounting of the biggest issues you are seeing on your platform when it comes to vaccine hesitancy, and the degree to which borderline content–as you define it–is playing a role. I've also been asking for what actions you have been taking to mitigate it as part of your "lockdown"–which in our first conversation, was said to be in response to concerns over borderline content, in our 1:1 convo you said was not out of any kind of concern over borderline content, and in our third conversation never even came up.

You said you would commit to us that you'd level with us. I am seeing in the press that you have data on the impact of borderline content, and its overlap with various communities. I have asked for this point blank, and got, instead, an overview of how the algorithm works, with a pivot to a conversation about profile frames, and a 45-minute meeting that seemed to provide you with more insights than it provided us.

I am not trying to play "gotcha" with you. We are gravely concerned that your service is one of the top drivers of vaccine hesitancy–period. I will also be the first to acknowledge that borderline content offers no easy solutions. But we want to know that you're trying, we want to know how we can help, and we want to know that you're not playing a shell game with us when we ask you what is going on.

This would all be a lot easier if you would just be straight with us.

Facebook’s representative responded, writing that “We obviously have work to do to gain your trust,” “We are… working to get you useful information that’s on the level,” and “I’ll expect you to hold me accountable.”

Copied on these emails was Andrew Slavitt, a senior advisor to the President for COVID-19 response, who wrote:

I appreciate being copied on the note. It would nice [sic] to establish trust. I do feel like relative to others, interactions with Facebook are not straightforward and the problems are worse–like you are trying to meet a minimum hurdle instead of trying to solve the problem and we have to ask you precise questions and even then we get highly scrubbed party line answers. We have urgency and don't sense it from you all. 100% of the questions I asked have never been answered and weeks have gone by.

Internally we have been considering our options on what to do about it.

On March 19, the two parties discussed the exchange by telephone. On Sunday, March 21, Facebook followed up with a summary of that call and next steps. The company would provide a consistent point of contact on its product team. It offered additional information about previously approved policies being implemented to reduce “the virality of content discouraging vaccines that does not contain actionable misinformation,” and it described similar policies for WhatsApp. Flaherty responded, “Awesome, [name redacted]... As I’ve said: this is not to play gotcha. It is to get a sense of what you are doing to manage this. This is a really tricky problem. You and I might disagree on the plan, but I want to get a sense of the problem and a sense of what you [sic] solutions are.”

Exchanges of this nature continued into April. The Fifth Circuit cites the frequency of communication as concerning, but given the ongoing public health emergency, it is reasonable to argue the exchanges were warranted. The White House would request data; Facebook would provide a briefing; the White House would complain that the briefing did not answer its questions about viral content that promoted vaccine skepticism while bordering on a policy violation.

When a video of former Fox News host Tucker Carlson discussing vaccine side effects was reported by CrowdTangle (a Facebook-owned data analysis tool) as the number one post on Facebook, the company responded that the video did “not qualify for removal under our policies… that said, the video is being labeled with a pointer to authoritative COVID information, it’s not being recommended to people, and it is being demoted.” While the Fifth Circuit does not specify the Facebook official implicated in this exchange, the Sauer documents make clear it is Nick Clegg, then Facebook’s Vice President for Global Affairs and former Deputy Prime Minister of the United Kingdom, hardly a figure likely to be cowed by government pressure.

“They’re Killing People”

The Fifth Circuit writes that by July, officials’ frustrations had “reached a boiling point.” In a joint press conference, the White House Press Secretary and the Surgeon General publicly called on the platforms to do more to confront vaccine misinformation and to “operate with greater transparency and accountability.” The next day, President Biden spoke on the same subject. Claiming that “the only pandemic we have is among the unvaccinated,” he blasted social media platforms for not doing more to contain vaccine misinformation. “They’re killing people,” he said bluntly.

The Fifth Circuit places great emphasis on the President’s comments. In particular, it notes that “a few days later, a White House official said they were ‘reviewing’ the legal liability of platforms—noting ‘the president speak[s] very aggressively about’ that—because ‘they should be held accountable.’” In response, the Fifth Circuit alleges that “the platforms responded with total compliance.”

But these were public statements, not private threats. When distinguishing between coercion and persuasion, the forum in which an utterance occurs matters. As for most elected officials, an essential part of the President’s job is to communicate his views to the public and to rally support for his agenda. Biden’s statements were made publicly to the press. Were they an act of coercion, or was he, like Senator Warren in the Kennedy case discussed below, “calling attention to an important issue and mobilizing public sentiment?”

As for the unnamed official who raised the prospect of reviewing platforms’ legal liability, this was White House Communications Director Kate Bedingfield, speaking to MSNBC. That these statements were made to the press weakens the argument that they were coercive threats: threats can be delivered in private, but appeals to the public cannot.

The situation might be different if the public statements were accompanied by a sharp private missive, but instead, Facebook reached out proactively to discuss the President’s comments. When the Fifth Circuit writes that “Facebook asked what it could do to ‘get back to a good place’ with the White House,” it is drawing from an email to White House Senior Advisor Anita Dunn on July 17, 2021, with the subject line “hoping to connect.” While Facebook might have anticipated consequences from the President’s anger, the relationship to a direct threat of action from the White House is less clear than the Court portrays.

A Flimsy Jawbone

In Kennedy, Jr. v. Warren, the Ninth Circuit Court of Appeals applied a series of tests to determine whether a public letter from Senator Elizabeth Warren, a prominent critic of tech companies, was an illegitimate attempt to coerce Amazon into suppressing a book about COVID-19 by prominent vaccine skeptic Robert F. Kennedy Jr. How do Flaherty and Slavitt fare against the same tests?

The first test in Kennedy, Jr. v. Warren is the “word choice and tone” of the alleged threat. The Fifth Circuit describes the platforms as capitulating to government demands; on the contrary, the full context of Flaherty’s exchange with Facebook shows a private company cautiously explaining and applying pre-existing policies, sometimes in defiance of the government’s desires, and government frustration at the lack of clear answers from the company. On multiple occasions, Flaherty acknowledges the difficulty of platform efforts to control borderline content. If the tone is sometimes frustrated, it is more often cordial and professional. The closest thing to a coercive threat in the exchange above is probably Slavitt’s warning that “[i]nternally we have been considering our options on what to do” about Facebook’s perceived lack of urgency. If there is a threat, it is implicit and vague; still potentially coercive, but with room for interpretation.

Another test in Kennedy asks whether the speaker has the ability to carry out the alleged threat. A threat of regulatory action from the local dogcatcher would qualify only in very narrow circumstances related to animal control. The White House is a different creature. Of the actors named in Murthy v. Missouri, it has the most power and authority to make and carry out threats against platforms. But even this power is limited. Absent Congressional approval, the White House cannot, for example, unilaterally repeal Section 230 of the Communications Decency Act, which shields platforms from liability for user posts in all but a few narrowly defined cases. In Kennedy, the court observed that Elizabeth Warren, as “a single Senator,” has “no unilateral power to penalize Amazon.”

Likewise, Warren’s letter, like Flaherty and Slavitt’s emails, did not reference a specific adverse action (another test applied by the Fifth Circuit). If this test is a doubtful fit for the White House, it is more doubtful still for less powerful actors like the CDC or, indeed, the Surgeon General, whose job is to be a public scold.

The last remaining test asks whether the recipient perceived the official’s statements as threatening. If the platforms did, it is not apparent from their exchange with Flaherty. The policies under discussion were not instituted at the White House's insistence. Many pre-date the Biden administration and began shortly after the onset of the COVID-19 pandemic; this is an obstacle to any claim that they emerged from coercion by the Biden White House.

Regarding specific accounts or posts flagged by the FBI and other government agencies, the Fifth Circuit admits that platforms took action on reports from law enforcement only about fifty percent of the time. The Stanford amicus brief notes that for the Election Integrity Partnership, the number was only thirty-five percent. These numbers do not suggest “total compliance” to a perceived government threat.

“I’ve had the somewhat surreal experience of learning that my decisions are not my own”

The Fifth Circuit ruling contains only three short paragraphs dedicated to the FBI. It alleges that “Per their operations, the FBI monitored platforms’ moderation policies, and asked for detailed assessments during their regular meetings. The platforms apparently changed their moderation policies in response to the FBI’s debriefs,” particularly around “hack and dump” operations. The FBI also “targeted domestically sourced ‘disinformation,’ like posts that stated incorrect poll hours or mail-in voting procedures.” The ruling contains no quotations from communications between the FBI and social media companies. Instead, it appears to be based on the deposition of FBI agent Elvis Chan, summarized in the memo accompanying the initial July 4 injunction.

Much of the discussion of Chan’s deposition revolves around “hack and dump” (or “hack-and-leak”) operations, especially the October 2020 release of materials allegedly taken from a laptop belonging to President Biden’s son, Hunter. According to the memo,

Social-media platforms updated their policies in 2020 to provide that posting “hacked materials” would violate their policies. According to Chan, the impetus for these changes was the repeated concern about a 2016-style “hack-and-leak” operation. Although Chan denies that the FBI urged the social-media platforms to change their policies on hacked material, Chan did admit that the FBI repeatedly asked the social-media companies whether they had changed their policies with regard to hacked materials because the FBI wanted to know what the companies would do if they received such materials.

Because the Hunter Biden laptop has become a source of scandal and conspiracy theories, it is important to note here that these policy changes pre-date the initial public reporting on its existence and the contents of its hard drive. The FBI and social media companies had good reason to worry about foreign state actors using hacked materials to influence the 2020 election: such actors had, after all, already done so in the 2016 US election and again in the 2017 French presidential election.

When the New York Post reported on the laptop’s contents weeks before the 2020 Presidential election, Facebook and Twitter, mistakenly suspecting that the quoted materials were the product of a foreign hack-and-leak operation, took steps to limit the story’s reach. In testimony before the House Oversight Committee, Yoel Roth, Twitter’s former head of site integrity, called the decision a mistake but denied government involvement in it.

Roth further described his interactions with federal officials in an essay for the Knight First Amendment Institute. “Over the last few months,” he writes, “I’ve had the somewhat surreal experience of learning that my decisions are not my own.” He worries that “the factual foundation” of the Fifth Circuit's ruling is “flawed” and later asserts that “[t]he FBI fastidiously… avoid[ed] both assertions that they’ve found platform policy violations, and requests that Twitter do anything other than assess the reported content under the platform’s applicable policies.”

In other words, law enforcement told platforms to do what they wanted with the information provided.

The Fifth Circuit Ruling Betrays Poor Understanding of Platform Trust & Safety

Alarmingly for a decision of such gravity, the Fifth Circuit’s writing betrays its ignorance of the work of platform trust & safety teams. First, on important matters, including national security, election integrity, and public health, the private and public sectors rely on one another for information and operational capacity. Roth elaborates on this working reality in his essay, arguing that communication between the government and social media companies is essential.

The Fifth Circuit does not grapple with this reality at all. As Roth writes:

The Fifth Circuit appears to hold that any information-sharing from the FBI to platforms is de facto coercive, simply by virtue of who it originated with. Even in situations where platforms actively solicit information from the government—as was regularly the case on election security matters between platforms and the FBI—the resulting communications are taken to be coercive because of the FBI’s standing as a law enforcement entity.

A similar argument applies to communications between platforms and expert bodies like the CDC and NIAID: it is not only permissible but desirable for major communications platforms to exchange information with leading public health experts, who can help companies formulate their policies based on the most recent data and research available.

But once again, the Fifth Circuit distorts this reality:

Ultimately, the CDC’s guidance informed, if not directly affected, the platforms’ moderation decisions. The platforms sought answers from the officials as to whether certain controversial claims were “true or false” and whether related posts should be taken down as misleading. The CDC officials obliged, directing the platforms as to what was or was not misinformation. Such designations directly controlled the platforms’ decision-making process for the removal of content. One platform noted that “[t]here are several claims that we will be able to remove as soon as the CDC debunks them; until then, we are unable to remove them.”

But anyone familiar with platform policymaking can, after a moment’s thought, see this for what it is: private companies cautiously trying to base their policy decisions on evidence in the midst of a public health emergency. Aren’t exchanges like this part of the reason the government employs medical experts? Does free expression require the private sector to make decisions in a vacuum?

Another misunderstanding surfaces at several points, where the Fifth Circuit conflates the removal of content with the demotion of content. When it writes that…

Even when the platforms did not expressly adopt changes… they removed flagged content that did not run afoul of their policies. For example, one email from Facebook stated that although a group of posts did not “violate our community standards,” it “should have demoted them before they went viral.” In another instance, Facebook recognized that a popular video did not qualify for removal under its policies but promised that it was being “labeled” and “demoted” anyway after the officials flagged it.

…it confuses two different types of content moderation. As shown in the email exchange with Flaherty above, Facebook’s policy is to reduce the distribution of “borderline” content that comes close to violating a policy but does not qualify for removal. That policy pre-dates the Biden administration; similar policies were in place, for example, in the run-up to the 2020 election and the January 6th insurrection. It is troubling that, in making allegations of coercion, the Fifth Circuit cannot distinguish between exceptions to policy and the application of existing policy.

Avoiding the Trap of Credulity

The Fifth Circuit’s ruling is heavily informed by partisan activists’ shared delusion that they toil under the oppressive thumb of a censorial state. That the Supreme Court might give credence to their paranoia should worry defenders of the First Amendment.

As stated above, the documents uncovered in discovery during Murthy v. Missouri (then known as Missouri v. Biden) were also provided by conservative attorney Dean John Sauer as part of his testimony to the Weaponization Subcommittee. Sauer was formerly Missouri’s Solicitor General and, later, Deputy Attorney General for Special Litigation; more recently, he represented President Trump in hearings on presidential immunity in United States of America v. Donald J. Trump. As Yoel Roth notes, many of the same claims and narratives related to government communication with platforms can be traced back to the Twitter Files provided by Elon Musk to a small group of independent writers in late 2022 and early 2023.

Murthy v. Missouri is best understood not as an academic exercise in identifying government coercion, but as part of a political campaign to end content moderation as we have known it. To take the plaintiffs’ claims at face value and ponder high-minded legal questions detached from the political reality is to fall into a trap of credulity.

In a way, this campaign has succeeded no matter how Murthy v. Missouri is decided. Through subpoenas, partisan press coverage, online harassment, and legal injunctions, it has generated a chilling effect that is already felt across the country. Even though the Fifth Circuit vacated the injunction against it, the State Department has canceled meetings with platforms about foreign influence operations. Other government efforts to work with platforms to confront this threat have also stalled. Nonprofits have seen grant money for counter-disinformation projects pulled. The researchers who were initially barred from communicating with the government by the district court’s since-overturned injunction have been burdened by Congressional subpoenas and public records requests alleging they are part of a vast censorship apparatus. Yoel Roth fled his home after an onslaught of online threats.

Meanwhile, the costs of election denial continue to mount, and their consequences now imperil future elections. Threats and harassment are driving election administrators out of the workforce; an amicus brief filed on their behalf describes the threats they receive as “shockingly graphic,” giving examples such as, “I KNOW WHERE YOU SLEEP, I SEE YOU SLEEPING. BE AFRAID,” and “EVERYONE WITH A GUN IS GOING TO BE AT YOUR HOUSE-AMERICANS [sic] LOOK AT THE NAME- [sic] ANOTHER JEW CAUGHT UP IN UNITED STATES VOTER FRAUD.” (Capital letters in original.)

Eight secretaries of state filed an amicus brief of their own, arguing that “forcing social media platforms to block all direct contact with government officials… will increase the risk that dangerous, and even illegal, falsehoods about elections and voting will spread unchecked.” And indeed, these lies have continued to spread since the 2020 election, leading to new waves of voter suppression. In some districts, election officials are stocking up on tourniquets and barricades and teaching staff how to defend themselves.

But perhaps the ultimate costs will come if President Trump retakes the White House in 2024, escaping accountability for disrupting the peaceful transfer of power and achieving his ambitions of purging the civil service and ending the independence of the Justice Department. Those threats seem more imminent and dangerous to democracy than any posed by the jawboning alleged in Murthy v. Missouri.

Legal scholars are right to point out that judicial standards around government coercion could be more explicit and that constructive legislation would help the government stay within permissible bounds. There are real causes for concern: it is easy to imagine past and future administrations exerting pressure on platforms in clearly coercive ways. Internationally, in India and elsewhere, the consequences of unchecked government coercion of social media platforms are playing out in real-time. To say that First Amendment advocates are over-focused on jawboning is not to deny that there are real risks and open legal questions at hand.

But advocates should also pause to consider the bigger picture. Is there anyone in the United States today who hasn’t encountered vaccine skepticism or denials about the outcome of the 2020 Presidential election? If these points of view suffer from a regime of state censorship, it must be history’s least effective example.

When it considers the case of Murthy v. Missouri later this year, the Supreme Court should clarify the standard for jawboning claims while rejecting the bulk of the Fifth Circuit’s radical and misguided ruling. Congress and the Executive Branch should assuage free expression concerns by considering guidelines and transparency requirements for government communications with platforms about online content. In the meantime, First Amendment advocates should abandon detached analysis in favor of reckoning with the organized campaign to shape law by putting conspiracies on the docket.

Authors

Dean Jackson
Dean Jackson is the principal behind Public Circle Research and Consulting and a specialist in democracy, media, and technology. Previously, he was an investigative analyst with the Select Committee to Investigate the January 6th Attack on the US Capitol and project manager of the Influence Operatio...
