AI Can’t Solve Government Waste – and May Hurt Vulnerable Americans
Kevin De Liban, Alice Marwick / Dec 10, 2024
Kevin De Liban is the founder of TechTonic Justice. Alice E. Marwick is director of research at Data & Society.
President-elect Trump has tasked the Department of Government Efficiency (DOGE) with cutting $2 trillion from the government budget. Its co-chairs, Elon Musk and Vivek Ramaswamy, speak about deleting entire agencies, drastically reducing regulations, and downsizing three-quarters of federal workers. But this tired (and false) trope of outsized bureaucracy now has a modern twist: remaking government through artificial intelligence (AI).
Though details are sparse, the plan peddled by DOGE and influential conservative think tanks appears to use “advanced technology” (meaning AI) first to identify staff to be cut and then, once they’re gone, to perform the functions those humans once did. And, with President Trump and the incoming Congress actively eyeing cuts to benefit programs like Medicaid, the Supplemental Nutrition Assistance Program (SNAP), veterans’ healthcare, and federal housing assistance, AI could be pushed as a way to reduce costs by restricting benefits and cutting the government staff needed to administer the programs.
DOGE is part of a larger political project to hollow out the government itself. Diminishing the government’s ability to perform basic tasks will decrease public trust, which in turn justifies further cuts and invites the expansion of the private sphere. Such a shift entails suffering, particularly for low-income people, people with disabilities, and other groups whom society marginalizes and who cannot afford the added costs of surviving where government is minimized. It also degrades the notion that there is a collective good to which we should all contribute and from which we should all benefit. In light of existing polarization, we need to strengthen our collective trust in institutions, not weaken it.
AI fits this project all too well. It promises quick solutions to complicated problems. It is sold as a way to replace government staff, make the remaining staff more productive, and decide what services will be available to the public and on what terms. Its use is largely unrestrained by law. And, with its veneer of sophistication, AI inspires unwarranted deference to its decisions. But the idea of using AI to cut government is as bad as it is blustering.
First, core governmental functions are actually underfunded, as key federal agencies have yet to recover from Obama-era budget sequestration or the cuts of the first Trump administration. Employment in many agencies has declined precipitously. Expertise lost to layoffs and resignations cannot easily be recovered with newer staff. In many cases, the loss of personnel is compounded by a lack of sufficient programmatic investment. As a result, infrastructure is failing, public housing projects are becoming unlivable, Social Security offices are closing and cannot process benefits applications quickly, and the IRS can’t enforce tax laws against wealthy individuals with sophisticated tax-evasion schemes.
Second, AI has a disastrous track record in high-stakes governmental uses. A just-released report by TechTonic Justice recounts the lowlights. When AI was used to determine eligibility for public benefits, thousands of disabled adults in Arkansas faced drastic cuts to the in-home caregiving services they depend on to stay out of nursing facilities, nearly 40,000 people in Michigan were falsely accused of fraudulently claiming unemployment benefits, and the state of Rhode Island was largely unable to provide SNAP benefits to eligible residents. In K-12 education, the sheriff’s department in Pasco County, Florida, using student data provided by the school district, claimed AI could predict the future criminal behavior of high school students. Deputies then proceeded to terrorize hundreds of students and their families based on nothing more than the prediction; there was no proof that the students had violated any laws. In sum, AI systems are generally opaque, difficult to contest, and susceptible to bias and error. Nor can they be relied upon to provide accurate information, as they routinely produce false or fabricated results.
And Musk and Ramaswamy’s industry expertise doesn’t make their AI efforts any more likely to succeed. The Economist recently reviewed available data on private-sector use of AI and found that it has had no meaningful impact. Relatively few companies have adopted it, and of those that did, many were forced to abandon pilot projects after AI failed to perform. Stocks of AI-benefiting firms lagged the market, worker productivity stayed flat, and labor markets saw no AI-driven upheaval. All told, the private sector’s experience suggests AI is unlikely to deliver efficiency on a grand scale. And integrating AI into government would only make the public sector more dependent on the wealthy tech companies that own the technology.
We have already seen the harms that AI inflicts on vulnerable communities. Using “government efficiency” as an excuse to replace human decision-makers with poorly designed systems will only compound these harms, rendering opaque and uncontestable verdicts that deeply affect people’s lives. To anyone who believes that public institutions should actually serve the public, this is an unacceptable bait-and-switch.