Perspective

'Sovereignty' Myth-Making in the AI Race

Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian / Jul 7, 2025

This piece is part of “Ideologies of Control: A Series on Tech Power and Democratic Crisis,” in collaboration with Data & Society. Read more about the series here.

NVIDIA CEO Jensen Huang delivers remarks as President Donald Trump looks on during an “Investing in America” event, Wednesday, April 30, 2025, in the Cross Hall of the White House. (Official White House photo by Joyce N. Boghosian)

In mid-May, US President Donald Trump made an official trip to a number of Arab Gulf states, accompanied by more than three dozen CEOs of US-based big technology companies. The visit produced over $600 billion worth of deals and celebratory proclamations by Gulf leaders, including Saudi Crown Prince Mohammed bin Salman, that their countries would now become hubs for independent, groundbreaking AI research and development in the Middle East. In what can only be described as an ironic confluence of events, G42 (the holding company for the United Arab Emirates’ AI strategy) was one of the partners, along with NVIDIA, at a France-sponsored event to build a European AI stack, even as NVIDIA and other American tech companies were partnering with the UAE. The geopolitical era of sovereign AI is truly here.

Tech sovereignty didn’t start with AI. Initial discussions of internet sovereignty originated in China in the early 2000s and 2010s. Given the historic global dominance of US-based big technology companies, however, the appetite for sovereign AI (that is, for self-sufficiency in the development of AI technologies) only began to develop during the first Trump administration’s trade war with China in 2018. Many of the chips that US technology companies relied on were manufactured in Taiwan. As China became more belligerent toward Taiwan, concerns about global AI production grew out of the question of what would happen to chip supply chains in the event of an all-out conflict between the two. During the Biden administration, increasing US chip production capacity and limiting the export of powerful GPUs to China became a top national security priority. (The Trump administration has since rescinded the framework under which these controls were put in place, but has not removed the specific restrictions limiting GPU exports to China.)

This intensifying adversarial relationship between the US and China, the newer and more aggressive assertion of American AI dominance by the Trump administration, and the ripple effects of these moves across Europe and the globe (which have manifested as a fear of being left behind in the AI race) have all pushed countries to prioritize sovereign control of the AI stack in their national AI strategies.

'Sovereignty as a Service' (SaaS)

Big tech companies recognize these priorities, and are themselves shaping the rhetoric of sovereign tech by, effectively, offering sovereignty as a service. This is happening at three different levels of the tech stack. Firstly, NVIDIA’s CEO has boldly declared, “Every country needs sovereign AI.” Under this imperative, the company is laying down chips and hardware infrastructure around the world, from Denmark to Thailand to New Zealand. NVIDIA describes the components comprising this global infrastructure as “AI factories,” which spin natural resources and energy into tokens of intelligence.

Secondly, cloud service providers are also getting into the SaaS game, offering sovereignty not just to national governments but also to private entities. Amazon Web Services, the foremost cloud service provider, offers an “AWS European Sovereign Cloud.” Microsoft Azure and Google Cloud also offer sovereign cloud products to private enterprises, including “sovereign” or “sovereignty” controls that encompass encryption and data localization.

And finally, at the level of model building and dataset annotation, open-source and multilingual AI have also been touted as supporting digital and AI sovereignty. Hugging Face has described open-source AI as a “cornerstone of digital sovereignty,” forming the foundation for “autonomy, innovation, and trust” in nations around the world. Countries around the world are funding the development of national language models: South Korea recently announced that it will invest 735 billion won (roughly $540 million) in the development of “sovereign AI” using Korean language data. Governments and companies alike cast performance gains in multilingual AI as sovereignty wins, promoting multilingual models as bolstering economic growth, commerce, and cultural preservation.

'Sovereignty' for you – control for me

An expansive view of digital sovereignty is that an entity — nation-state, regional grouping, community — should control its own digital destiny. The twist with SaaS is that the “clients” are negotiating away key aspects of their sovereignty in the process.

Consider NVIDIA. What appears to be a straightforward transaction (territory, energy, and resources in exchange for the company’s chips to build out national sovereign AI infrastructure) is complicated by the company’s other business interests. NVIDIA is also in the business of providing cloud services and developing its own AI models, and these arms of the business are part of its sovereign AI package deal: the company is training Saudi Arabia’s university and government scientists to build out “physical” and “agentic” AI, and, besides laying the infrastructural groundwork in India, it is training engineers at Indian businesses to use its AI offerings.

NVIDIA’s AI models, like its multilingual offerings, would benefit significantly from the cultural and language data already being transmitted through its infrastructure. Government and enterprise use of NVIDIA’s models through the company’s AI API and cloud opens opportunities for NVIDIA to siphon high-quality data from around the world to bolster its own offerings. The fact that language data extracted from these countries could improve governmental and enterprise clients’ access to high-quality multilingual models, like the Nemotron language models, provides a legitimating rationale for the company’s collection and use of that data, even as the same data enriches the company’s other models.

Finally, the company’s AI models have to be trained somewhere. Governmental lock-in to NVIDIA’s infrastructure could mean that residents not only bear the costs of national AI production but also the costs of the company’s own operations. Other tech companies, such as Meta, have already tried to structure data center utilities so that residents foot the power bill. The rhetoric of “sovereign AI” (that this infrastructure benefits these countries and that the countries control AI production) further justifies these costs to residents. Those dependent on the infrastructure are left to accept an attractive myth, doused in technical language and the promise of national technological leadership, that buries a less flattering reality: they may not be sovereign over their AI infrastructure, over how and to what degree their territory and resources are used in the production of AI for their interests or for NVIDIA’s.

Model building and data annotation: 'Sovereign AI' as labor and expertise extraction

By contributing their expertise to train multilingual models, which are seen as prime examples of sovereign AI, translators around the world are being placed in a vulnerable and uncertain position: they are annotating data for models that supplant their own labor. The impact of AI on translator roles is especially felt in Turkey, where translators have played a respected role in the country’s diplomatic history. Rather than empowering communities that speak low-resource languages, multilingual models covering those languages could instead work to their detriment. Cohere, which focuses on multilingual models, has formed a partnership with Palantir, which supplies software infrastructure to entities like US Immigration and Customs Enforcement (ICE). Human language annotators have been told to convert the machine-like responses of LLMs into more human-like ones. Yet the subtle cultural and linguistic nuances that “sovereign” multilingual models aim to capture are arguably key to resisting political oppression: culturally specific emojis and nicknames, for example, have been used to counteract censorship. Giving surveillance-oriented entities access to this language expertise could shut down avenues for resistance and for the assertion of autonomy, of sovereignty.

Finally, a number of “sovereign” multilingual models are open-sourced or built from open-source models, which have themselves been painted as supporting sovereignty. While open-source or synthetic models can be extremely worthwhile technological efforts, highlighting only these offerings can downplay and ultimately bury the ways in which these models, their language data, and the surrounding community involvement serve proprietary multilingual models and more targeted business interests. It is important to remain vigilant to how the rhetoric that this labor and these models are in the service of cultural preservation can obfuscate less savory uses, from labor supplantation to surveillance.

'Sovereignty' for whom?

In the 19th century, European powers deployed build-operate-transfer schemes, or BOTs, as a tool of colonial expansion. In these schemes, private, metropolitan companies provided the capital, knowledge, and resources to construct key pieces of infrastructure, such as railroads, ports, canals, roads, and telegraph lines, either in formal colonies, like the British in India, or in places where their government was trying to expand power and influence, like the Germans in Anatolia, the heart of the Ottoman Empire, on the eve of World War I.

Sovereignty as a service represents a modern incarnation of this colonial mode. The rhetoric is part of a new political economy of global politics in which traditional institutional sites of power are preserved as facades but hollowed out, so that what was formerly collective property becomes a commodity accessed by subscription, as Laleh Khalili has written in a recent London Review of Books essay on defense contractors. Two decades ago, the US Department of Defense owned, and likely developed, the software it operated; now it runs corporate software, like products from Palantir, that it pays a regular subscription fee to access (and that, in Palantir’s case, it was sued into considering). This subscription model enables continuous rent extraction and gives corporations the ability not only to update or fix the software remotely, but also to turn it off at the source when the governments or institutions beholden to it do not act according to the corporation’s wishes. If we take seriously the problematic metaphor of an AI arms race, or of a “war” to control the 21st century, then tech companies, with their SaaS offerings, are acting as arms dealers, encouraging the illusion of a race for sovereign control while being the true powers behind the scenes.

Authors

Rui-Jie Yew
Rui-Jie Yew is a PhD student at Brown, where she conducts research on AI policy. She holds an S.M. from MIT.
Kate Elizabeth Creasey
Kate Elizabeth Creasey is a historian interested in AI and sovereignty in the Global South. She is currently finishing her PhD dissertation on the 1980 military coup d'état in Turkey at Brown University.
Suresh Venkatasubramanian
Suresh Venkatasubramanian directs the Center for Tech Responsibility at Brown University, where he is a professor of Computer Science and Data Science.
