Anatomy of an AI Coup

Eryk Salvaggio / Feb 9, 2025

DOGE is gutting federal agencies to install AI across the government. Democracy is on the line, writes Tech Policy Press fellow Eryk Salvaggio.

Washington, DC, January 20, 2025: Billionaire tech entrepreneur and Department of Government Efficiency "special government employee" Elon Musk looks up in the Capitol Rotunda ahead of the inauguration ceremony where Donald Trump was sworn in as the 47th US President. (Photo by CHIP SOMODEVILLA/POOL/AFP via Getty Images)

Artificial intelligence (AI) is a technology for manufacturing excuses. While lacking clear definitions or tools for assessment, AI has nonetheless seized the imagination of politicians and managers across government, academia, and industry. But what AI is best at producing is justifications. If you want a labor force, a regulatory bureaucracy, or accountability to disappear, you simply say, “AI can do it.” Then, the conversation shifts from explaining why these things should or should not go away to questions about how AI would work in their place.

We are in the midst of a political coup that, if successful, would forever change the nature of American government. It is not taking place in the streets. There is no martial law. It is taking place cubicle by cubicle in federal agencies and in the mundane automation of bureaucracy. The rationale is based on a productivity myth that the goal of bureaucracy is merely what it produces (services, information, governance) and can be isolated from the process through which democracy achieves those ends: debate, deliberation, and consensus.

AI then becomes a tool for replacing politics. The Trump administration frames generative AI as a remedy to "government waste." However, what it seeks to automate is not paperwork but democratic decision-making. Elon Musk and his Department of Government Efficiency (DOGE) are banking on a popular delusion that word prediction technologies make meaningful inferences about the world. They are using it to sidestep Congressional oversight of the budget, which is, Constitutionally, the allotment of resources to government programs through representative politics.

While discussing an AI coup may seem conspiratorial or paranoid, it's banal. In contrast to Musk and his acolytes' ongoing claims of "existential risk," which envision AI taking over the world through brute force, an AI coup rises from collective decisions about how much power we hand to machines. It is political offloading, shifting the messy work of winning political debates to the false authority of machine analytics. It's a way of displacing the collective decision-making at the core of representative politics.

The Cast

We can set the stage by describing the cast. In Elon Musk's part-time job at DOGE, he takes the lead role. His team aims to use generative AI to find budget efficiencies even as he eviscerates the civil service. The DOGE entity has already attempted to take over the Treasury Department's computer system to distribute funds and effectively disbanded USAID. Musk hopes to deliver an "AI-first strategy" for government agencies, such as GSAi, "a custom generative AI chatbot for the US General Services Administration."

Thomas Shedd, a former engineer at Tesla who now serves as the General Services Administration's Technology Transformation Services director, is tasked with this mission. Shedd has said that "the federal government needs a centralized data repository" for analyzing government contracts despite dubious legality around data preservation and privacy.

Then there is the supporting cast. First is a team of bit players who serve as DOGE operatives. These engineers, some reportedly aged 19 to 24, arrived at various government agencies to take control of computer systems without even giving their full names or stating their purpose. Though the youngest just graduated from high school, this crew has interfered with networks at the Centers for Disease Control and the Centers for Medicare and Medicaid Services, with Musk refusing to discuss what DOGE is doing with the data. On February 5th, they began "data mining" veterans' benefits and disability compensation records at the Department of Veterans Affairs. The list goes on.

Finally, we have the supposed grown-ups. In Trump's executive order advancing AI, the President calls for "AI systems that are free from ideological bias or engineered social agendas" and to revoke "directives that act as barriers to American AI innovation," a plan for which will be developed by the new AI and crypto czar, venture capitalist David Sacks. Sacks will be joined by Project 2025 architect Russell Vought, now head of the Office of Management and Budget (OMB), and by Michael Kratsios, White House Chief Technology Officer in the first Trump administration and now nominee for director of the White House Office of Science and Technology Policy.

The Plan

Amidst the chaos in Washington, Silicon Valley firms will continue to build their case that they are the answer. We can expect another industry announcement of a radical new capability for AI in the near future. OpenAI may once again claim to reach PhD-level intelligence (as in September 2024 and again in January 2025), or DOGE may launch a new chatbot trained on government data.

After months of ridiculing the civil service for ineptitude and shouting about the woke politics of academic research institutions, a new delusion about word prediction will likely emerge from any such announcement. In that story, the beneficiaries of the AI coup will announce the perfect solution to government failures and their reviled commitment to diversity, equity, and inclusion in science. The solution will be a "centralized data repository" hooked to a chatbot and a suite of promises.

Shedd described such a project on tape at a meeting with his new team:

Because as we decrease the overall size of the federal government, as you all know, there's still a ton of programs that need to exist, which is this huge opportunity for technology and automation to come in full force, which is why you all are so key and critical to this next phase…It is the time to build because, as I was saying, the demand for technical services is going to go through the roof.

To serve its purpose, any generative AI deployed here wouldn't have to be good at making decisions or even showcase any new capacities at all. It merely has to be considered a plausible competitor to human decision-making long enough to dislodge the existing human decision-makers in civil service, workers who embody the institution's values and mission. Once replaced, the human knowledge that produces the institution will be lost.

Once the employees are terminated, the institution is no longer itself. Trump and Musk can shift it to any purpose inscribed into the chatbot's system prompt, directing the kind of output it is allowed to provide to a user.

After that, the automated system can fail–but that is a feature, not a bug. If the system fails, the Silicon Valley elite that created it will secure its place in a new technical regime. This regime concentrates power with those who understand and control the system's maintenance, upkeep, and upgrades. Any failure would also accelerate efforts to shift work to private contractors. OpenAI's ChatGPT Gov is a prime example of a system ready to come into play. By shifting government decisions to AI systems they must know are unsuitable, these tech elites avoid a political debate they would probably lose. Instead, they create a nationwide IT crisis that they alone can fix.

Weaken the Opposition

As the technical elite embeds generative AI into hollowed-out institutions, the administration will carry on its effort to eviscerate independent research institutions. Trump campaigned in 2023 for an "American University," an online resource presenting "study groups, mentors, industry partnerships, and the latest breakthrough in computing" that "will be strictly non-political, and there will be no wokeness or jihadism allowed." Trump proposed that American University would be funded by "taxing, fining, and suing excessively large private university endowments."

Pair this with the Trump administration's reported keyword-based system that rejects research grants even tangentially focused on diversity and inclusion. Much of this will impact universities' scientific research and their finances. Eventually, this would create a crisis through which higher education, with its commitments to diversity already neutered, could be starved to death. A weakened university research ecosystem would strengthen the private sector by luring scientists to industry labs, diminishing independent research oversight.

Lists of reported keywords that flag a grant application for rejection by the National Science Foundation include terms linked to debiasing AI. Experts have long known that algorithmic systems are biased toward the majority because they favor statistically dominant samples. However, words used in research to study and address algorithmic bias — including the word "bias" — are now red flags for funding. Other words relevant to the study of algorithmic bias abound, such as studying "underrepresented" populations or "systemic" biases in training data. This is all part of Trump’s promise to expunge “Radical Leftwing ideas” about AI.

Seizing congressional oversight of government spending and of programs enacted by statute, and handing it to an automated system, would be the first sign that the AI coup is complete. It would signal the transition from democratic governance to technocratic automatism, in which the engineers determine how to co-opt Congressional funding toward the goals of the executive branch. Refusing to share insight into the system's outputs — deferring to a combination of security or even commercial concerns and myths of "black box" neural networks — would shield it from any real scrutiny by Congress.

DOGE aims to replace government bureaucracy with technical infrastructure. Reversing and dismantling dependencies embedded in infrastructure is slow and difficult, especially when efforts to study systemic bias are prohibited. The ingredients for "technofascism" will be assembled.

Generating a Crisis

Eventually, the shoddy infrastructure of these automated government agencies and services will produce language or code that creates an AI-driven national crisis. Because no AI system is presently suited to the complex task of governance, failure is inevitable. Deploying that system anyway is a human decision, and humans should be held accountable.

The designers of AI have repeatedly told us that it poses a threat akin to the atomic bomb.

Langdon Winner once wrote that the infrastructural requirements of a nation with nuclear weapons "demand that it be controlled by a centralized, rigidly hierarchical chain of command closed to all influences that might make its workings unpredictable. The internal social system of the bomb must be authoritarian; there is no other way."

The bomb is a real risk to human life. But Winner warns that the mere perception of a technology's risk can inspire social rigidity around its use that spills into society. What happens when the "atomic bomb" is an AI system claiming to automate decisions within a democratic government?

Repeated, vague warnings about the dangers of generative artificial intelligence from Sam Altman, Elon Musk, and others have primed the public to believe such a threat is on the horizon. They argue that combining words into compelling arrangements will lead to our physical annihilation.

Unfortunately, many legislators believe this myth. Years of bipartisan lobbying by groups focused narrowly on AI's "existential risks" have positioned it as a security threat controllable only by Silicon Valley's technical elite. They now stand poised to benefit from any crisis.

Since automated systems cannot ensure secure code, likely scenarios include a data breach to an adversary. In a rush to build at the expense of safety, teams might deploy AI-written software and platforms directly. Generated code could, for example, implement security measures dependent on external or compromised assets. This may be the best-case failure scenario. Alternatives include a "hallucination" about sensitive or essential data that could send cascading effects through automated dependencies with physical, fatal secondary consequences.

How might the US respond? The recent emergence of DeepSeek — a cheaper and more efficient Chinese large language model — provides a playbook. Politicians of both parties responded to the revelation of more affordable, less energy-dependent AI by declaring their commitment to US AI companies and their approach. A crisis would provide another bipartisan opportunity to justify further investment in the name of an AI-first US policy.

Algorithmic Resistance

The AI coup emerged not just from the union of Donald Trump and Elon Musk. It is born of practices and beliefs now standard among Silicon Valley ideologues that are obscure to most Americans. However, the tech industry's weakness is that it has never understood the emotional and social complexity of actual human beings.

Much of what I describe above assumes a passive public, a compliant bureaucracy, and a do-nothing Congress. It assumes that operating in a legal gray area is a way to evade judicial oversight. This tactic is well-known in Silicon Valley, where technical innovation is increasingly rare, but regulatory evasion is common.

Speed is essential to their work. They know they cannot create a public consensus for this effort and must move before it takes shape. By moving fast and breaking things, DOGE forces a collapse of the system where unanswered questions are met with technological solutions. Shifting the conversation to the technical is a way of locking policymakers and the public out of decisions and shifting that power to the code they write.

The AI coup depends on a frame of government efficiency. This creates a trap for Democratic representatives, where arguing to keep government services–and government employees–will be spun as supporting government waste. But this is also an opportunity. AI achieves "efficiency" by eradicating services. AI, like Big Data before it, can use convenience and efficiency to bolster claims to expand digital surveillance and strip away democratic processes while diminishing accountability.

Do not fall for the trap. Democratic participation and representative politics in government are not "waste." Nor should arguments focus on the technical limits of particular systems, as the tech elites are constantly revising expectations upward through endless promises of exponential improvements. The argument must be that no computerized system should replace the voice of voters. Do not ask if the machines can be trusted. Ask who controls them.

Authors

Eryk Salvaggio
Eryk Salvaggio is a blend of hacker, researcher, designer, and media artist exploring the social and cultural impacts of technology, including artificial intelligence. He is a 2025 visiting professor at the Rochester Institute of Technology's Humanities, Computing, and Design program and an instruct...
