TikTok Tug-of-War: Policymakers Conflicted Between Open Internet and Regulatory Control

Dean Woodley Ball / May 2, 2024

An image of TikTok CEO Shou Chew from his testimony to the US Senate Judiciary Committee at the Capitol in Washington DC, January 31, 2024 depicted next to a TikTok icon.

Two steps taken a mere 24 hours apart last week by the federal government underscore an inherent tension in US technology policy: Do policymakers want to “protect the open internet” or do they wish to exert more authority over what is and is not allowed online?

President Joe Biden signed a law forcing the divestiture of TikTok, the social media app owned by the Chinese firm ByteDance. Absent a divestiture, the app will be banned from US app stores (most notably those of Apple and Google) and forbidden from being hosted within US borders. The next day, the Federal Communications Commission (FCC) voted to reclassify broadband as a telecommunications service—a principle long known as “Net Neutrality”—in the interest of “protect[ing] the open internet.” Right now, in other words, the US government is pushing in both directions simultaneously.

This tension between openness and security is one of the most challenging unresolved questions in government today—yet policy decisions are often made without any acknowledgement that it exists. Without a clear understanding of this dynamic and the tradeoffs inherent in both directions, we risk sleepwalking into a regulatory regime that both limits competition online and puts Americans’ rights in jeopardy.

The TikTok bill notably does not forbid internet service providers (ISPs) from serving TikTok to users. Instead, it relies primarily on Apple and Google for its effectiveness: they control the app stores through which TikTok is distributed to almost all users. TikTok also exists as a website, but the site is not nearly as popular or feature-rich as the smartphone app.

In this sense, the government is not so much exerting authority over the internet as exploiting the power Apple and Google wield—the same power that policymakers in both the US and the European Union have attacked rhetorically and via policy.

Indeed, the EU’s Digital Markets Act is an instructive example. The aim of the DMA, at least as it applies to app stores, is to force Apple and Google to allow third-party app stores on their devices, as well as the direct downloads of apps from the web with no app store intermediary at all. This more open system resembles, in theory, the openness of the software marketplace on personal computers, where anyone can download and run whatever software they wish.

In practice, however, EU regulators did not want to make the tradeoffs that come with this kind of openness. In a truly open internet, any software is permissible, including malware and software designed for illicit purposes. Instead of accepting that tradeoff, regulators have chosen to force Apple and Google to police third-party app stores and sideloaded apps, though to a lesser extent than they police their own app stores.

The result, then, is that the EU is now a kind of meta-regulator sitting atop Apple and Google, who do the dirty work of regulating the market for smartphone software. Of course, both firms did the same thing before the DMA—the difference is that now, they answer to policymakers rather than to shareholders or customers.

A similar dynamic is playing out in the US, though more modestly. Multiple states have passed laws requiring social media companies to verify the ages of users before they can create accounts. The federal government is considering a similar law. Some of the laws, including the proposed federal law, also require social media services to either obtain parental consent or offer parental controls. None of these laws specify how exactly social media companies are to verify users’ ages or tie underage users to their parents in a reliable and secure manner. Instead, the companies are left to figure out that complex task on their own.

Recognizing the difficulty of the challenge of online identity validation, social media companies, along with some policymakers, have pointed to none other than Apple and Google as the responsible parties. Regardless of the logic and constitutionality of these age limit laws (they will undoubtedly be litigated), the dynamic is clear: Instead of governing the internet directly, policymakers are looking to deputize technology firms—primarily Apple and Google—with quasi-governmental power.

There are numerous dangers associated with this regulatory path.

First, the current market power of Apple and Google in the smartphone market is, ultimately, a function of consumer preference. This kind of market power is more delicate than it might appear, because consumer preferences can and do change. Not so long ago, Microsoft seemed to have an unbreakable hold on all personal computing, and MySpace was the dominant player in social media. Perhaps the current smartphone ecosystem will wane in relevance, losing power to new, AI-enabled mobile devices powered by software from other platforms. Apps themselves may wane in relevance, with AI handling many of the information retrieval and manipulation tasks that consumers use apps for today.

Second, to the extent Apple, Google, and other tech companies come under regulators’ thumbs, their ability to innovate may diminish. They may come to resemble retail banks, encumbered as banks are with both heavy regulation and the obligation, through mechanisms such as Know Your Customer rules, to serve on the front lines of law enforcement. If their market power proves durable, either because the smartphone is a sticky form factor or because their size and power allow them to wield undue influence over regulators (or both), that would mean less focus on innovation and a far less consumer-friendly smartphone experience. How many people would like their experience with their most-used computing device to be more like interacting with a bank or an airline?

Finally, and most importantly, this path is problematic because any firm through which the government exercises its power is not necessarily obligated to respect users’ constitutional rights or observe due process. The use of technology companies as governing intermediaries may therefore enable greater abuses of power than the government could legally achieve on its own.

Regardless of one’s opinions about the merits of the TikTok ban, children’s use of social media, or even app store policies, policymakers and citizens alike should be aware of the dangers of the regulatory approach that is emerging. In the physical world, governments establish legitimacy in part by creating value, not simply by commanding others. They build roads and other infrastructure and provide basic services—including the means to verify identification.

Policymakers would be wise to consider how that might translate to the digital world. The internet, at the end of the day, is not within the unambiguous and practical jurisdiction of any government, and especially not one that respects freedom of speech and other basic rights. It is a force to be contended with, and not simply a territory to be commanded.

Authors

Dean Woodley Ball
Dean Woodley Ball is a Research Fellow in the Artificial Intelligence & Progress Project at George Mason University’s Mercatus Center and author of Hyperdimensional. His work focuses on emerging technologies and the future of governance.