What the Supreme Court Got Right in the TikTok Decision
Tim Bernard / Jan 29, 2025

On Friday, January 17, the United States Supreme Court unanimously upheld the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, the law that requires China's ByteDance to divest TikTok or face a ban on the app. The Court deferred to the US government in its assessment that China’s potential access to user data presents a national security risk to the US. However, the government also argued that the risk of content manipulation by China was a similar threat. This claim was largely ignored by the majority opinion, which deemed the data security issue alone sufficient to uphold the law.
Jeffrey Fisher, counsel for a group of TikTok creators and a law professor at Stanford University, had challenged the government’s latter determination during oral arguments. In his opening statement, he claimed that “under the First Amendment, mere ideas do not constitute a national security threat.” This argument was affirmed by Justice Neil Gorsuch in his concurring opinion:
One man’s ‘covert content manipulation’ is another’s ‘editorial discretion.’ Journalists, publishers, and speakers of all kinds routinely make less-than-transparent judgments about what stories to tell and how to tell them. ... It makes no difference that Americans (like TikTok Inc. and many of its users) may wish to make decisions about what they say in concert with a foreign adversary.
The lack of transparency that Gorsuch references was hinted at during oral arguments by Justice Elena Kagan, in an observation that the motivations behind social network recommendation feeds are typically hidden:
I mean none of these are apparent, right? You get what you get and you think, ‘That's puzzling,’ and it is all a little bit of a black box.
The obscurity and complexity of these algorithms raise another set of issues—less ideological and more technical than whether “mere ideas” can constitute a national security risk—that also goes to the heart of the supposed risk that TikTok uniquely represents. Yet these issues went undiscussed during oral arguments and appear neither in the decision nor in most of the briefs.
The exception is an amicus brief from Milton L. Mueller, a Georgia Tech professor and director of the Internet Governance Project. Mueller argues that any full-scale transformation of TikTok into a Chinese propaganda machine would be noticeable to the users and would likely put them off, while subtler attempts would have very limited chances of achieving significant influence.
The government’s content manipulation case was grounded in the assumption that if China were merely able to adjust TikTok’s feed algorithm, it could “sow... discord and disinformation” among US users. Three questions must be answered to evaluate the severity of this threat, none of which has a clearly established answer:
- What is the potential of social media algorithms (not platforms as a whole) to have a significant impact on society?
- What are the mechanisms through which they have this impact?
- To what degree is China able to effectively wield them to achieve specific goals?
One academic study cautions that “the current evidence on how algorithms affect well-being, misinformation, and polarization suggests that the role of algorithms in these phenomena is far from straightforward and that substantial further empirical research is needed.” While scholars have suggested methods for changing recommender algorithms to, for example, reduce polarization or the spread of misinformation, these have not been extensively and publicly field-tested, let alone in the unique TikTok context. (Modest tests on Facebook did not reveal any panaceas.)
Metrics like user engagement and time spent on platform are relatively easy to measure and thus to iteratively optimize for, and are therefore adopted by platforms as proxies for user satisfaction, despite some serious flaws. User opinion may be even more difficult to evaluate, especially covertly. When Facebook adopted the new north star of maximizing “meaningful social interaction” in its feed, it appears to have—inadvertently—exacerbated the spread of polarizing content, according to documents shared by whistleblower Frances Haugen. The process of adopting this new metric, however, involved large-scale surveys, numerous experiments, and the testing of multiple metrics to identify unintended consequences. These would be incredibly difficult to replicate in secret.
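To make the proxy-metric point concrete, the following minimal Python sketch shows the general shape of engagement-based ranking. The signal names, weights, and functions are hypothetical illustrations, not TikTok's or Facebook's actual code; the point is only that the inputs are all observable behaviors that are easy to log and optimize, while a quantity like "did this feed actually serve or persuade the user" never appears in the loop.

```python
# Hypothetical sketch of engagement-proxy ranking; signal names and weights
# are illustrative, not any platform's actual system.
from dataclasses import dataclass

@dataclass
class CandidateVideo:
    video_id: str
    p_watch_full: float  # predicted probability the user watches to the end
    p_like: float        # predicted probability of a like
    p_share: float       # predicted probability of a share
    p_rewatch: float     # predicted probability of an immediate rewatch

# Weights a platform might tune through A/B testing; these values are made up.
WEIGHTS = {"p_watch_full": 1.0, "p_like": 0.5, "p_share": 2.0, "p_rewatch": 1.5}

def engagement_score(video: CandidateVideo) -> float:
    """Combine predicted engagement signals into one ranking score.

    Every input is an observable behavior that is cheap to log and optimize.
    Nothing here measures whether the user was well served or persuaded of
    anything -- that is the sense in which engagement is only a proxy.
    """
    return sum(weight * getattr(video, name) for name, weight in WEIGHTS.items())

def rank_feed(candidates: list[CandidateVideo]) -> list[CandidateVideo]:
    """Order candidate videos by descending engagement score."""
    return sorted(candidates, key=engagement_score, reverse=True)
```

An operator hoping to covertly steer opinion would have to intervene somewhere in a loop like this without visibly degrading the very scores the system is built to maximize, which is part of why such manipulation is hard to carry out both effectively and quietly.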
Any Chinese attempts at increasing polarization or the spread of misinformation (let alone at promoting specific viewpoints supporting China’s interests) would therefore be highly speculative, and any intervention might backfire. Professor Mueller’s brief also explains that any substantial interference with TikTok’s For You Feed may well imperil the very thing that makes the app compelling for its users.
It is unlikely, therefore, that there is a big red button marked “misinformation” or “polarization” that Chinese operatives can just push. Regarding misinformation in particular, some platforms do have content moderation classifiers that attempt to reduce the spread of misinformation, and these can be turned off (a simplified sketch of such a switch appears below). However:
- TikTok’s brief states that content moderation policy and implementation are under the control of the US entity, so ByteDance in China may not operate misinformation classifiers directly.
- Even if it does operate them, it is unlikely that this would be effective or escape the notice of US employees. TikTok’s most recent transparency report under the EU Code of Practice on Disinformation explains that, although it does use machine learning classifiers, unlike in some other policy areas, its approach to moderating misinformation relies more comprehensively on human moderation.*
- TikTok US may, with its moderation efforts, detect and limit the spread of content that the Chinese authorities sought to disseminate, thereby counteracting any strong deleterious effects.
- Changes that result from such a move, if effective, may be noticeable to users.
(Several of these uncertainties could have been clarified through congressional investigation, but that does not appear to have happened.)
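As for the switch itself, the sketch below illustrates, under loudly hypothetical assumptions, what "turning off" a misinformation classifier typically amounts to: removing a demotion applied to content the model happens to flag. The classifier stub, threshold, and demotion factor are invented for illustration and do not describe TikTok's pipeline; the point is that the lever is blunt, affecting whatever the classifier flags rather than promoting any chosen narrative.

```python
# Toy sketch of classifier-driven demotion with an on/off switch; every name
# and number here is hypothetical, not a description of TikTok's real pipeline.
import hashlib

MISINFO_CLASSIFIER_ENABLED = True  # the hypothetical "switch"
FLAG_THRESHOLD = 0.9               # classifier score above which content is demoted
DEMOTION_FACTOR = 0.1              # flagged content gets 10% of its normal reach

def predicted_misinfo_probability(video_id: str) -> float:
    """Stand-in for a machine-learning classifier: a deterministic pseudo-score in [0, 1]."""
    digest = hashlib.sha256(video_id.encode()).digest()
    return digest[0] / 255.0

def distribution_multiplier(video_id: str) -> float:
    """Scale back the reach of content the classifier flags as likely misinformation."""
    if not MISINFO_CLASSIFIER_ENABLED:
        # Disabling the classifier lifts demotion for *all* flagged content at once;
        # it offers no way to boost a particular message.
        return 1.0
    if predicted_misinfo_probability(video_id) > FLAG_THRESHOLD:
        return DEMOTION_FACTOR
    return 1.0
```

Because the only effect is a diffuse change in how much flagged content circulates, such a move would likely show up in the platform's own metrics and transparency reporting well before it delivered any targeted persuasive payoff.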
So much for the gaps in the government’s theory regarding TikTok’s algorithms and misinformation. Some academics, however, have made progress in developing reasonably well-supported theories about how our social media feeds do play a role in the decay of our information ecosystem. Research suggests that mis- and disinformation online rarely directly convince people of things that they formerly did not believe. As Nieman Lab’s Joshua Benton put it, “Most misinformation reaches people who are already misinformed—or at least very open to being misinformed.” A recent article by journalist Charlie Warzel and researcher Mike Caulfield locates the danger of online misinformation in its capacity to provide limitless confirmation for what users already believe.
When it comes to recommendation engines, Professor Mueller’s brief notes that “algorithms detect and respond to preferences, and can amplify or diminish pre-existing attitudes, but do not create them.” If this is indeed the true risk from social media feed algorithms, then current engagement-based algorithms on TikTok and other platforms are already well-optimized to give us more of what we already like. To the best of our understanding, it seems that it is this industry-standard practice that fuels information pathology—without requiring any extra help from foreign adversaries.
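Mueller's point about amplification can be read as a feedback loop: the ranker learns from whatever a user already engages with and then serves more of it. Here is a minimal sketch of that loop, using a made-up per-topic affinity model rather than anything resembling a production recommender:

```python
# Minimal sketch of an engagement feedback loop over content topics; the
# update rule and numbers are illustrative, not any platform's real logic.
from collections import defaultdict
import random

affinity = defaultdict(lambda: 0.1)  # learned per-topic preference scores
LEARNING_RATE = 0.2

def pick_topic(topics: list[str]) -> str:
    """Sample the next recommended topic in proportion to learned affinity."""
    weights = [affinity[t] for t in topics]
    return random.choices(topics, weights=weights, k=1)[0]

def record_engagement(topic: str, engaged: bool) -> None:
    """Nudge affinity toward observed behavior: engagement begets more of the same."""
    target = 1.0 if engaged else 0.0
    affinity[topic] += LEARNING_RATE * (target - affinity[topic])

# A user who reliably engages with one topic sees its share of recommendations grow.
topics = ["gardening", "election_rumors", "cooking"]
for _ in range(50):
    shown = pick_topic(topics)
    record_engagement(shown, engaged=(shown == "election_rumors"))
print(sorted(affinity.items(), key=lambda kv: kv[1], reverse=True))
```

No adversary is needed for this loop to keep confirming what the user already prefers; the reinforcement is simply the system doing what it was built to do.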
____
* See TikTok's report under the EU Code of Practice on Disinformation for the period January 1, 2024 - June 30, 2024:
“However, misinformation is different from other content issues. Context and fact-checking are critical to consistently and accurately enforcing our misinformation policies. So while we use machine learning models to help detect potential misinformation, ultimately our approach today is having our moderation team assess, confirm, and remove misinformation violations. We have misinformation moderators who have enhanced training, expertise, and tools to take action on harmful misinformation. This includes a repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions and direct access to our fact-checking partners who help assess the accuracy of new content.”