Five Things To Watch For In NetChoice Supreme Court Oral Argument

Ben Lennett / Feb 24, 2024

Silhouettes of Florida and Texas superimposed on an image of the Supreme Court.

On Monday, the US Supreme Court will hear two cases, Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The Court is reviewing two state laws targeting social media companies and assessing whether they violate the First Amendment. Broadly speaking, the laws restrict how social media platforms can moderate user content (including, in the case of the Florida law, a platform’s ability to ban or deplatform users) and establish obligations for platforms to disclose their content moderation policies and decisions to affected users.

Both cases arrive at the Supreme Court after making their way through different district and appellate courts. In Moody v. NetChoice, NetChoice and the Computer and Communications Industry Association (CCIA), industry groups representing the social media companies, won an injunction against the Florida law in the US District Court for the Northern District of Florida, which the Eleventh Circuit Court of Appeals later upheld. In NetChoice, LLC v. Paxton, NetChoice and CCIA also initially received a favorable ruling from a US District Court in Texas, which blocked the State of Texas from enforcing its social media law. The US Court of Appeals for the Fifth Circuit later upheld the Texas law and reversed the district court's injunction.

The two cases represent a circuit split, in which different federal appellate courts have interpreted federal and constitutional law differently, making them ripe for Supreme Court review. As we tune into Monday’s oral arguments, Tech Policy Press will be watching for the following five things.

Will the justices be more comfortable with these social media cases?

Last year, the Supreme Court heard oral arguments in two related cases, Gonzalez v. Google, LLC, and Twitter, Inc. v. Taamneh. Those cases involved families who lost loved ones in terrorist attacks and were suing social media companies for the platforms’ alleged role in supporting terrorist groups and terrorist activities. The Court’s review included questions about Section 230 of the Communications Decency Act and the Justice Against Sponsors of Terrorism Act (JASTA). In the Gonzalez case, civil society and industry groups submitted numerous briefs defending or critiquing the federal court's interpretation of Section 230.

Yet the most memorable moment from the oral argument came from Justice Elena Kagan. In an exchange with counsel representing the families concerning the scope of Section 230’s protection, she proclaimed, to the laughter of the audience in the court, “.... I mean, we're a court. We really don't know about these things. You know, these are not like the nine greatest experts on the Internet.”

Given that the Court granted review in both cases specifically to determine the laws’ conformity with the First Amendment, the justices may have a clearer understanding of the issues at hand. Of course, in Gonzalez and Taamneh, the Court was able to take an off-ramp and largely avoid addressing any major questions about the scope of the liability protections Section 230 provides to platforms. They likely will not have that option in these cases with respect to First Amendment protections for social media platforms.

What will Justice Thomas’s position be?

Another surprise during the Gonzalez and Taamneh oral arguments, and in the later decision, was the extent to which Justice Thomas largely defended Section 230. Prior to those cases, Justice Thomas had written critically of social media companies’ content moderation practices and the liability protections afforded to them. In a statement attached to the Supreme Court’s decision to reject a writ of certiorari in a case involving two companies that provide software to enable content filtering, the justice questioned courts’ expansive interpretation of Section 230: “Adopting the too-common practice of reading extra immunity into statutes where it does not belong… courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms.”

However, despite authoring the opinion in the Taamneh case, Justice Thomas left Section 230 alone. The opinion defended social media companies’ discretion to remove certain content or users, finding that “Plaintiffs’ complaint rests heavily on defendants’ failure to act, yet plaintiffs identify no duty that would require defendants or other communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends.”

More germane to the NetChoice cases is a concurring opinion Justice Thomas wrote in Biden v. Knight First Amendment Institute, a case involving former President Trump’s practice of blocking certain Twitter users from his personal account. In that opinion, Justice Thomas wrote:

“The long history in this country and in England of restricting the exclusion right of common carriers and places of public accommodation may save similar regulations today from triggering heightened scrutiny—especially where a restriction would not prohibit the company from speaking or force the company to endorse the speech…. There is a fair argument that some digital platforms are sufficiently akin to common carriers or places of accommodation to be regulated in this manner.”

If this argument sounds familiar, it is because it is the central argument that the states of Florida and Texas, as well as other defenders of the laws, make to justify the limits they are placing on the content moderation decisions of social media platforms. Indeed, it is certainly possible Justice Thomas’s opinion influenced the authors of both bills. Citing Thomas’s opinion several times, the Fifth Circuit Court of Appeals also agreed with the State of Texas’s categorization of platforms as common carriers in its decision to overturn an earlier district court decision that found the Texas law was likely unconstitutional.

Will the justices be receptive to the Fifth Circuit’s interpretation of the First Amendment?

The Fifth Circuit’s decision came as a surprise to many, as it completely contradicts the district court's earlier decision in NetChoice v. Paxton, as well as the district and appellate court decisions in Moody v. NetChoice. As NetChoice argues in its brief in the Paxton case:

“Rather than focus on this Court’s precedent and the clear rule it compels, the Fifth Circuit majority spent much of its opinion engaging in its own examination of what it deemed to be ‘the original public meaning of the First Amendment.’”

Those earlier district and appellate court decisions broadly recognized that social media platforms’ content moderation activities constitute editorial discretion and are thus protected by the First Amendment. Courts have generally protected both the right of private parties to speak and their right to decide what not to say. The Florida and Texas laws restrict the ability of social media platforms to make those decisions as part of their moderation activities. As the district court found in the Paxton case, “Social media platforms have a First Amendment right to moderate content disseminated on their platforms.”

In addition, the courts found that provisions in both social media laws should trigger strict scrutiny to determine whether they violate First Amendment protections, because they are content-based and dependent on the viewpoint of a message or the identity of the speaker. For example, the Florida law prohibits social media platforms from deplatforming or removing any user who is a known candidate for elected office and restricts social media companies’ ability to moderate the content of a “journalistic enterprise.” The federal district court in Moody found that the Florida law’s content-based restrictions were subject to strict scrutiny, under which courts must determine whether the law “furthers a compelling state interest” and is “narrowly tailored to achieve that interest.” The court determined those restrictions did not survive strict scrutiny and thus violated the First Amendment.

The Fifth Circuit, however, did not recognize First Amendment protections for “editorial discretion.” Instead, as Moshe Klein and Lindsay Maher explain, the Fifth Circuit found “that such discretion only arises where a law compels or restricts the speech of the private party itself—whereas the Texas law concerns not platforms’ own speech, but how platforms treat users' speech.” The court thus found no “evidence in the record…” that the Texas law “could in fact suppress any constitutionally protected speech by anyone…,” and therefore “no basis… for subjecting [it] to strict scrutiny.”

It is unclear whether the Fifth Circuit’s contrarian interpretation will sway even the more conservative justices on the Court, given how much of a departure it appears to be from existing precedent. Still, it is worth noting that the Fifth Circuit, before deciding the Paxton case, abruptly lifted the temporary injunction put in place by the lower court. The groups representing social media platforms filed an emergency application with the Supreme Court, which ultimately reinstated the preliminary injunction preventing enforcement of the law, with Justices Thomas, Alito, and Gorsuch dissenting. In the dissent, authored by Justice Alito, the justices wrote that “[i]t is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies, but Texas argues that its law is permissible under our case law.”

How much consideration will the justices give to the specific boundaries of the First Amendment for social media?

Numerous freedom of speech and public interest groups filed amicus briefs in the NetChoice cases in support of neither party, urging the Court to be careful in determining how much or how little protection the First Amendment provides to social media platforms’ moderation activities. For example, the Knight First Amendment Institute argues in its brief that “none of the parties in this case offers a compelling theory of how the First Amendment should apply to the regulation of social media.” The states’ argument that “platforms’ content-moderation decisions do not implicate the First Amendment at all” could lead to “sweeping [governmental] authority over the digital public sphere…” Conversely, the platforms’ argument “that any regulation implicating their content-moderation decisions must be subjected to the most stringent First Amendment scrutiny…” “would make it nearly impossible for governments to enact even carefully drawn laws that serve First Amendment values.”

State governments also want to ensure that the Court preserves some ability for them to regulate social media platforms, no matter how it rules in the NetChoice cases. A brief from several states, including New York, urged the “Court to make clear that States may regulate social media platforms consistent with the First Amendment.” They argue that “the Court must consider each challenged provision individually. Where a provision applies only to non-expressive conduct or otherwise has little or no effect on speech, the First Amendment does not limit States’ authority to regulate.”

These concerns are not unwarranted. Courts are increasingly using the First Amendment to restrict government regulation in areas of business that would seem to have no direct connection to speech. The most recent example is a decision involving a NetChoice-led legal challenge to the California Age-Appropriate Design Code. Last September, the US District Court for the Northern District of California granted a preliminary injunction in favor of NetChoice on the grounds that the Act violates the First Amendment. Among the findings of the court in NetChoice v. Bonta was that the Act’s prohibitions on websites’ collection, sale, sharing, or retention of children’s personal information “limit the ‘availability and use’ of information by certain speakers and for certain purposes and thus regulate protected speech.”

In an article for The Atlantic, former White House advisor Tim Wu argues that the court’s “little feat of logic creates an extraordinary immunity that may well protect nearly everything” that a social media platform does with data. There could be a similar danger in the NetChoice cases. As a brief from the Electronic Privacy Information Center (EPIC) argues, “... if the court were to adopt a maximalist reading that every decision a social media company makes as to the arrangement and dissemination of user-generated content is editorial judgment, every privacy law would directly or indirectly burden speech, and many would be impossible to enforce.”

Where does Section 230 fit into the Court’s review, if at all?

One notable omission from many of the briefs submitted in the NetChoice cases was Section 230. The law, which fundamentally shapes most policy debates concerning social media platforms, is not specifically under consideration by the Supreme Court in these cases. The Court has limited its review to two questions:

1. Whether the laws’ content-moderation restrictions comply with the First Amendment.

2. Whether the laws’ individualized-explanation requirements comply with the First Amendment.

Both the State of Florida and NetChoice petitioned the Supreme Court to review only the First Amendment implications of the laws in question. We do not pretend to understand the particular legal reasoning behind the Court’s narrow review. In the Moody case, the district court held that provisions in the Florida law were preempted by Section 230, finding, for example, that “the federal statute also preempts the parts of [the] Florida Statutes... that purport to impose liability for other decisions to remove or restrict access to content.”

However, on appeal from the Florida Attorney General, the Eleventh Circuit Court of Appeals did not assess the merits of a Section 230 preemption challenge: “Because we conclude that the Act’s content-moderation restrictions are substantially likely to violate the First Amendment, and because that conclusion fully disposes of the appeal.” Similarly, in the Paxton case, the district court noted in its decision that it “need not and does not reach the issues of whether [the Texas law] is… preempted by the Communications Decency Act.”

To the extent the Fifth Circuit discusses Section 230, it uses the statute to support its conclusion that the Texas social media law is constitutional, arguing that “Section 230 reflects Congress’s judgment that the Platforms do not operate like traditional publishers and are not ‘speak[ing]’ when they host user-submitted content.” Thus, “Congress’s judgment reinforces our conclusion that the Platforms’ censorship is not speech under the First Amendment.”

Interestingly, the plaintiffs in the aforementioned Gonzalez and Taamneh cases filed a brief in response to the State of Florida’s petition for a writ of certiorari that pointed to “the principle of constitutional avoidance,” which dictates that courts not decide “constitutional questions” if a case can be resolved on other grounds. The brief argues that the Court does not need to decide these cases on First Amendment grounds but should instead review NetChoice’s Section 230 arguments: “The parties clearly disagree, but the questions about which they disagree concern only the meaning and implications of section 230.”

The Supreme Court decided otherwise, but many briefs pointed to Section 230 both to oppose and to support the states’ social media laws. For example, former Congressman Christopher Cox and Senator Ron Wyden (D-OR), who co-authored Section 230 of the Communications Decency Act while they were both US Representatives, filed a brief in support of NetChoice. They argued that the “Fifth Circuit erroneously invoked Section 230 in support of its conclusion that internet platforms are mere conduits without First Amendment rights to editorial discretion.” Instead, “Internet platforms are speakers with First Amendment rights to edit and moderate the third-party content they publish. That is why Congress enacted Section 230 in the first place.”

In a brief in support of Paxton, Senator Josh Hawley (R-MO) argued the opposite: “Under Section 230, providers shall not be treated by courts as the publishers of others’ speech because, in fact, they are not. They are, in principal part, conduits.” He points to the Court’s recent decision in Taamneh, where it noted that the platforms’ moderation was “passive and content-agnostic.” Hawley’s interpretation of Taamneh underscores that even though the Court did not appear to touch Section 230 in that decision, how it framed the operation of social media platforms may have opened the door for the Texas and Florida laws.

In an article for Brookings last year, Stanford scholar Daphne Keller predicted such a development: “Taamneh’s characterization of platform moderation practices may shape both future must-carry and must-remove cases. Regarding must-carry, Taamneh makes it sound as if platforms are already more or less what Texas and Florida want them to be: common carriers with no rules or preferences for user speech.” Thus, even if Section 230 is not specifically on the docket in the NetChoice cases, the Court’s decision could still influence how courts and governments interpret the law’s protections for social media and other internet services.
