AI as Doublespeak for Austerity

Likhita Banerji, Damini Satija / Feb 7, 2025

In 1909, the Grand Palais in Paris hosted an exposition of aircraft, zeppelins, and balloons. Next week, it will host the Paris AI Action Summit. (Image: Wikimedia)

As world leaders convene for the Paris AI Action Summit next week, they must confront a troubling reality: the rapid deployment of artificial intelligence (AI) in the public sector is exacerbating inequality, expanding mass surveillance, and violating human rights. In just the first few weeks of the year, governments across the world have relentlessly pursued projects that will advance ever greater automation of our lives. The UK government announced a sweeping public rollout of AI technologies, touting how AI will be ‘mainlined into the UK’s veins.’ A week later, the Trump administration unveiled Stargate, a staggering $500bn AI infrastructure venture between OpenAI, Oracle, SoftBank, and MGX.

These investments are part of a larger trend, in which government-backed experimentation with AI in the public sector is accompanied by crushing budget cuts to essential services such as housing, education, and healthcare.

While governments present these announcements as ‘efficiency solutions’, they increasingly go hand in hand with austerity policies and the deployment of data-intensive AI technologies that ultimately lead to exclusion, discrimination, and the entrenchment of corporate power.

In fact, such mass-scale public sector experimentation with AI is only possible because of corporate power deeply entangled in state tech infrastructure. AI companies routinely laud their technologies as transformative efficiency tools, a deeply compelling sell to governments likewise immersed in the language of efficiency.

However, recent events have shown yet again how shaky these companies’ promises are. For instance, DeepSeek’s launch threw the AI sector, pundits, and the stock market into disarray by undercutting US companies’ claims about the scale of computational power, cost, and data required to build new generative AI systems. These same companies had argued that such computational power was the necessary means to achieve AI that would take us out of the climate disaster, even as getting there incurs massive environmental costs.

Wherever one may land on the DeepSeek news, it is clear that the visions these companies sell rest on shaky ground, driven by business interests rather than grounded in scientific evidence or societal need. Public sector partnerships with such AI companies are a risky and callous bet on a mere idea of ‘efficiency’, one that is often unproven and constructed at the expense of marginalized and low-income populations.

Enough warning bells have already been sounded about AI deployment in the public sector, from experts to the communities most impacted by the unchecked roll-out of AI technologies. Research has shown time and again that without ironclad human rights protections built into the heart of the technological development and deployment process, the utopia some promise could all too easily descend into dystopia, creating more punitive conditions for those living in poverty.

For instance, in 2023, Amnesty International’s research in Serbia showed how poverty-stricken and marginalized communities were pushed further into poverty when an automated welfare delivery system funded by the World Bank stripped them of social assistance. The automated system relied on erroneous data, which, coupled with a wholly inadequate social protection system, amounted to a rights-violating and dysfunctional benefits delivery process.

Even in Denmark, the welfare authority Udbetaling Danmark (UDK) uses AI-powered algorithms to flag individuals for social benefits fraud investigations, leading people to unwillingly, or even unknowingly, forfeit their right to privacy. The system has created an atmosphere of fear and mass surveillance, particularly impacting people with disabilities, low-income individuals, migrants, refugees, and racialized communities.

A similar welfare system used by Sweden’s Social Insurance Agency unjustly flags marginalized groups for benefits fraud inspections. Likewise, a discriminatory risk-scoring algorithm deployed by the French Social Security Agency to detect overpayments and errors in benefit payments treats individuals who experience marginalization, including people with disabilities, single parents who are mostly women, and those living in poverty, with suspicion, assigning them higher risk scores and thus disproportionately impacting them.

We must be supremely careful about how we navigate these uncharted waters, for there is nothing intelligent about letting technological leaps widen existing divisions, inequalities, and exclusions, as we are already seeing. There is nothing intelligent about allowing machines to make decisions about who deserves to put food on their table. Governments and companies at the AI Action Summit must reckon with these harms and advance rights-respecting AI regulation that curbs the rollout of the most harmful uses of AI. Systems that have already been found to create exclusion and inequality should be rolled back immediately.

At the Summit, those in power should advance a transformative agenda for change that truly prioritizes people and communities in the technological development process over the whims of corporations.

Authors

Likhita Banerji
Likhita Banerji is the Head of the Algorithmic Accountability Lab at Amnesty Tech.
Damini Satija
Damini Satija is the Director of Amnesty Tech.
