The TikTok Ban Paradox: How Platform Restrictions Create What They Aim to Prevent

Alex Turvy, Rebecca Scharlach / Dec 18, 2024

Bans and blanket restrictions on social media, like the impending US TikTok ban or Australia’s recent age restrictions, are often presented as decisive solutions to complex problems. These measures promise to safeguard national security, protect user data, or shield vulnerable users from harm. Yet, they rarely achieve their intended goals. Instead, they create a paradox: rather than mitigating risks, such restrictions make platforms and user practices less governable. Users circumvent controls, oversight is fragmented, and transparency gives way to opacity—all while opportunities for meaningful governance are lost.

In the US, the TikTok ban is set to go into effect in just weeks, and uncertainty looms large. Regulators, courts, ByteDance, and recently re-elected President Donald Trump—always a wildcard—are all part of a nebulous assemblage shaping TikTok’s future. For policymakers, this moment is a minefield. For academics, it’s a chance to explore how platform bans reveal deeper contradictions in digital governance. And for American users, it’s a time of uncertainty and disruption. By examining what social media bans promise, what they fail to deliver, and how they reshape the digital landscape, we can illuminate the broader implications of blanket restrictions for platform governance and global digital ecosystems.

The Paradox in Action

When governments impose bans or blanket restrictions on platforms, the results often betray their stated aims. Instead of reducing harm, these measures trigger contradictory effects that amplify the very risks they claim to address. The proposed US ban on TikTok illustrates this dynamic, as do parallel developments like Australia's recent restriction on social media for users under 16. While different in scope, both measures ostensibly aim to enhance safety by mitigating perceived risks. Yet these policies create unintended consequences that undermine their stated objectives in three key ways: driving users to less regulated spaces, fragmenting oversight, and reducing transparency.

Take ByteDance's compliance efforts. The company attempted to localize US TikTok user data through Project Texas, placing it under Oracle's management and establishing transparency measures. Beyond technical compliance, ByteDance has also invested in public relations campaigns such as "TikTok for Good," which showcase predominantly American creators doing good for others and position the platform as a positive force for creativity and individual well-being. TikTok also uses more formal tools like its guidelines and policies to strategically position itself as a responsible steward—particularly for young users. Last year, TikTok CEO Shou Zi Chew even directly called on US-based users to demonstrate their support for the platform and express how much it means to them and how it improves their lives. Despite these efforts, regulators have remained unconvinced, even though no concrete evidence has surfaced showing that TikTok has misused user data or shared it with the Chinese government. Following calls by fellow media and communication researchers, we argue that this lack of evidence raises questions about whether the ban addresses actual risks or serves as a symbolic gesture driven by broader geopolitical anxieties. This reveals a key tension: platform bans are often less about addressing specific risks and more about projecting control in an increasingly fragmented digital landscape.

This desire to assert control and project power is particularly strong in the US, where a long tradition of policies promoting cultural and economic hegemony shapes regulatory approaches to global platforms. Targeting TikTok reflects broader anxieties about non-US tech companies challenging Silicon Valley's dominance and fears of losing influence in the evolving digital economy. More broadly, it also reflects US concerns about protecting its position as a global superpower amid ascending Chinese influence. In contrast, Australia’s under-16 social media restriction demonstrates a domestic approach, aiming to safeguard younger audiences rather than exerting geopolitical control. These contrasting motivations highlight how regulatory priorities differ depending on the broader concerns of each government. Regardless of intent, such bans often function as symbolic gestures that prioritize appearances over meaningful governance.

Australia’s under-16 social media restriction exemplifies the contradictory effects of platform bans. Ostensibly designed to protect youth, the measure has drawn criticism for its blunt implementation, which relies on unproven technologies to verify users' ages. Researchers argue that the restriction will push young people toward VPNs and less regulated platforms, leaving them more vulnerable while generating sensitive user data that could become a target for exploitation. Moreover, by pushing platforms like TikTok out of the country and their users into less visible spaces, such measures reduce transparency and accountability without necessarily addressing the problems they were meant to tackle. Platforms operating in legal gray zones have little incentive to cooperate with regulators, share data with researchers, or invest in content moderation. By banning TikTok, the US risks driving millions of users to VPNs, where enforcement of data protection laws becomes nearly impossible, exacerbating the opacity of global data flows. In extreme cases, like TikTok’s potential withdrawal from the US market, such bans could sever regulatory engagement entirely, leaving users reliant on alternative access methods and rendering the platform ecosystem far more opaque and difficult to oversee.

Past bans highlight similar patterns. For instance, excluding Chinese telecom firm Huawei from Western markets did not eliminate the company but drove its expansion into less regulated regions, reducing oversight and cooperation. Similarly, India's 2020 ban on Chinese-owned apps like WeChat and TikTok pushed users toward global alternatives like Telegram and Signal and local apps like Chingari. While these outcomes align with nationalist goals of fostering local platforms, redirecting users to other international services, and reducing reliance on foreign apps deemed problematic, they complicate broader governance. In Myanmar, the ban on Facebook following the 2021 military coup pushed activists to platforms like Twitter, where they faced new challenges, such as smaller reach and engagement, reduced algorithmic visibility, and increased harassment from pro-military accounts. These cases suggest that bans displace, rather than resolve, governance challenges, pushing users toward more opaque and potentially riskier ecosystems.

Together, these examples underscore the paradox at the heart of platform bans: they aim to reduce risks but instead fragment oversight, drive users to less regulated spaces, and reduce transparency while claiming to increase security. This is not just policy failure—it's a fundamental misjudgment of TikTok's central role in the American economy, the significance of social media in users' daily lives, and the dynamic ways platforms and their communities adapt to external pressures.

Platforms and the Geopolitics of Control

At the core of the TikTok ban paradox lies a fundamental misunderstanding of platforms as discrete entities that can simply be controlled or eliminated. Platforms are, in reality, adaptive networks shaped by the interplay of users, algorithms, and markets. As researchers Tarleton Gillespie and Anne Helmond have shown, the embeddedness of platforms in broader sociotechnical systems makes blunt tools like bans counterproductive, as they fail to account for the cross-platform flows of data, content, and user practices that define today's digital ecosystems.

Moreover, platform bans are deeply intertwined with geopolitical struggles. The proposed TikTok ban reflects US concerns not just about data privacy but about preserving its cultural and economic dominance in the face of rising Chinese influence. Social scientist Safiya Umoja Noble reminds us that platforms and their governance are never neutral; they are entangled with global power structures. The TikTok ban, then, can be seen as a symbolic effort to reassert US control over the digital sphere, even as it exacerbates the risks it claims to address.

The paradox is clear: these measures don’t mitigate risks—they displace them. They reduce transparency, fragment oversight, and perpetuate the underlying logics of data extraction and surveillance capitalism. Without addressing these systemic drivers, banning one platform merely creates space for another to emerge and repeat the same cycles of harm and exploitation—or enables existing platforms, like Meta, to reassert dominance, raising new concerns about unchecked market power and geopolitical influence. This understanding of platforms as adaptive networks and the geopolitical stakes of their governance points toward the need for a more strategic, collaborative approach that prioritizes transparency, accountability, and global cooperation over simplistic restrictions.

Beyond Bans: Embracing the Good, the Bad, and the Ugly

The looming US TikTok ban underscores the urgent need for governance models beyond binary restrictions. The EU’s Digital Services Act (DSA) offers a step in the right direction, emphasizing transparency, accountability, and systemic risk mitigation rather than outright bans. By requiring platforms to assess risks, collaborate with regulators, and ensure clear terms of service, the DSA reflects a more balanced understanding of platform governance in a globalized digital landscape. However, new regulatory approaches like the DSA are not without flaws. Platform regulation expert Daphne Keller points out that platform companies have developed a so-called “compliance culture,” moderating speech to meet regulatory standards and avoid trouble.

Policymakers must prioritize flexible, iterative frameworks that incentivize compliance and collaboration over punishment. Independent oversight bodies, regular algorithmic audits, and robust data protection standards are essential components of such frameworks. These models must also be participatory, incorporating input from civil society organizations, academic experts, and affected communities. Governance frameworks that embrace complexity are more likely to balance innovation, accountability, and user protection.

The US TikTok ban is a critical juncture that lays bare the paradoxes and limitations of current government regulatory paradigms. This decision will have far-reaching implications for platform governance, from how we balance national sovereignty with global digital ecosystems to how we approach data privacy and algorithmic accountability. While the path forward is uncertain, one thing is clear: the era of simplistic bans must end. Instead, we must embrace governance models that reflect the fluid, adaptive, and interconnected realities of the platformed age, prioritizing nuance, flexibility, and collaboration over blunt, unilateral solutions. In this way, the TikTok ban represents more than just a policy failure—it’s a missed opportunity to lead the way by recognizing the reality of what platforms are and the necessity of adaptive approaches to their governance.

Authors

Alex Turvy
Alex Turvy is a sociologist of digital culture and a PhD candidate at Tulane University. His mixed-methods research examines the interplay between platform governance and user practices with a focus on TikTok and Instagram. His work has been published in Social Media + Society, the International Jou...
Rebecca Scharlach
Dr. Rebecca Scharlach is a Postdoctoral Researcher at the Platform Governance, Media & Technology Lab at the University of Bremen (ZeMKI). Her research investigates how tech companies navigate the integration of generative AI and strive to uphold core values amid technological and regulatory changes...
