
Automated Decision-Making in Government and the Risk to Democratic Norms

Maya Recanati / Jul 1, 2024

Emily Rand & LOTI / Better Images of AI / AI City / CC-BY 4.0

In 2015, the Australian government introduced an automated debt collection program to recover alleged overpayments of welfare benefits. The program relied on an algorithm that averaged annual pay information from the national tax office across fortnights and compared the result to the income recipients had reported to the country’s social services agency, Centrelink. Because that averaging assumed earnings were spread evenly across the year, the system incorrectly flagged over 381,000 people, many of whom simply had fluctuating incomes. Six years later, the government found itself in the middle of a class-action lawsuit in which the judge determined that the program represented a “shameful chapter” in Australia’s history, responsible for “financial hardship, anxiety, and distress.”
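The core failure is easy to demonstrate. The Python sketch below is a hypothetical reconstruction of income averaging as described in public reporting on the scheme; the function names, dollar amounts, and clawback rule are invented for illustration and do not reflect the actual system.

```python
# Hypothetical sketch of robodebt-style income averaging. All names,
# amounts, and the clawback rule are illustrative, not the real system.

FORTNIGHTS_PER_YEAR = 26

def alleged_debt(annual_income: float,
                 reported_income: list[float],
                 benefits_paid: list[float]) -> float:
    """Flag an overpayment by comparing averaged annual income against
    what the recipient reported each fortnight."""
    averaged = annual_income / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for reported, benefit in zip(reported_income, benefits_paid):
        # The flaw: averaging assumes income was earned evenly all year,
        # so a fortnight of honest $0 reporting looks like under-reporting.
        if averaged > reported:
            debt += benefit  # simplification: claw back the full payment
    return debt

# Someone who earned $26,000 during six months of work, then honestly
# reported $0 income while receiving benefits for the rest of the year:
worked = [2000.0] * 13 + [0.0] * 13    # actual fortnightly earnings
benefit = [0.0] * 13 + [500.0] * 13    # benefits only while unemployed
print(alleged_debt(26000.0, worked, benefit))  # 6500.0, despite no fraud
```

Under these assumptions, an entirely honest recipient is assigned a substantial debt purely because the averaging step erases the timing of their earnings.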

Australia’s failed experiment with its so-called robo-debt scheme represents but one example of governments’ increasing reliance on and trust in automated decision-making (ADM) to deliver crucial social services. The challenges of automated decision-making in wealthier countries like Australia and the Netherlands are well-documented and well-known among digital rights watchers. However, less public attention has been paid to the global diffusion of these systems, often with the support of Western companies and large donors, to settings where democratic guardrails may be absent or under strain.

Governments from Jordan to Serbia have adopted digital tools that automate crucial tasks in the name of boosting efficiency and accuracy. The proliferation of artificial intelligence (AI) has further accelerated this trend by providing governments with the ability to process and analyze data necessary for ADM systems quickly and inexpensively. Integrating automated decision-making into social assistance programs can potentially aid under-resourced caseworkers and increase access to benefits.

However, programs deploying ADM also project a false sense of objectivity and are riddled with inaccuracies that may amplify preexisting social biases, jeopardize privacy protections, and constrict the delivery of social services, among other risks. As ADM becomes increasingly common, establishing principles around transparency and accountability for digital tools that control access to government services is more critical than ever.

Automated Decision-Making Defined: Risks and Benefits

Definitions of automated decision-making vary. At its core, the term refers to tasks performed by a machine or technology designed to augment or replace human decision-making. On the surface, ADM allows governments to streamline operations, for example, by automating routine tasks. In practice, however, ADM systems have a track record of exacerbating discrimination against marginalized groups because they rely on data that reflects preexisting real-world inequities, undermining vital democratic and human rights protections.

In one instance, a controversial Polish unemployment assistance program categorically classified single mothers as the least “employable,” jeopardizing their eligibility for assistance. In 2018, Poland’s Constitutional Tribunal ruled that the program violated the country’s constitution, and the government declared its intent to disband it.

Discriminatory decision-making often stems from imprecise training data that lacks crucial historical and cultural context. Take the World Bank-funded Takaful program in Jordan. The initiative sought to use ADM to distribute poverty-relief cash transfers to those most in need but ended up disqualifying potential recipients based on inaccurate indicators of poverty, such as electricity usage. The algorithm did not account for the fact that poorer families may consume more energy because they lack access to newer, energy-efficient appliances.
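A short sketch can make the proxy problem concrete. The scoring function below is hypothetical; the indicators, weights, and eligibility cutoff are invented for illustration and are not drawn from Takaful’s actual model.

```python
# Hypothetical poverty-scoring sketch; indicators, weights, and the
# eligibility cutoff are invented, not taken from the Takaful program.

def poverty_score(household: dict) -> float:
    """Higher score = judged poorer = prioritized for cash transfers."""
    score = 2.0 * household["dependents"]
    score -= 1.5 * household["monthly_income"] / 100
    # Flawed proxy: high electricity use is read as a sign of wealth,
    # but poor families with old, inefficient appliances use more power.
    score -= 1.0 * household["monthly_kwh"] / 100
    return score

ELIGIBILITY_CUTOFF = 0.0

# Two households with identical income and dependents; only the age
# and efficiency of their appliances differ.
efficient = {"dependents": 4, "monthly_income": 200, "monthly_kwh": 150}
inefficient = {"dependents": 4, "monthly_income": 200, "monthly_kwh": 600}

print(poverty_score(efficient))    # 3.5  -> above cutoff, qualifies
print(poverty_score(inefficient))  # -1.0 -> excluded, despite equal need
```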

The discriminatory risks of ADM are particularly pronounced when it is used in a predictive capacity, as biased algorithms can reach inaccurate, misleading, and discriminatory conclusions about vulnerable populations. In 2018, the Argentine province of Salta partnered with Microsoft to develop an algorithm that sought to identify girls “predestined” for teen pregnancy so that authorities could conduct outreach to those singled out, although it remains unclear what follow-up actually occurred. The algorithm based its predictions on factors such as ethnicity, country of origin, and access to hot water, but failed to account for the local and historical context that shaped the system’s outputs. Ultimately, the program primarily profiled poor, minority girls and ignored crucial factors that influence teen pregnancy rates, such as access to sex education and contraception.
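The same dynamic can be sketched for predictive profiling. In the hypothetical risk score below, every feature and weight is invented for illustration and does not reflect the actual Salta model; the point is only that a model built on socioeconomic and demographic proxies ranks people by poverty and ethnicity, not by the causal drivers it never sees.

```python
# Hypothetical predictive-risk sketch; features and weights are invented
# and do not reflect the actual Salta/Microsoft system.

RISK_WEIGHTS = {
    "no_hot_water": 0.4,        # socioeconomic proxy
    "minority_ethnicity": 0.3,  # demographic proxy
    "migrant_household": 0.2,   # demographic proxy
    "under_16": 0.1,
}

def predicted_risk(person: dict) -> float:
    """Weighted sum of binary features; above a threshold, the person
    is flagged for outreach."""
    return sum(w for feat, w in RISK_WEIGHTS.items() if person.get(feat))

# Two teenagers in identical circumstances except for the proxy markers.
# Note what the model never sees: access to sex education or
# contraception, the factors that actually shape teen pregnancy rates.
a = {"no_hot_water": True, "minority_ethnicity": True, "under_16": True}
b = {"under_16": True}
print(predicted_risk(a), predicted_risk(b))  # 0.8 0.1: only 'a' is flagged
```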

Many automated decision-making programs also compel potential recipients to relinquish their privacy rights, protected under Article 12 of the Universal Declaration of Human Rights (UDHR). This breach of digital privacy exacerbates socioeconomic stratification and creates a system in which only the wealthy can exercise this fundamental right. ADM systems collect vast amounts of personal information to build “extensive profiles” on individuals and determine welfare eligibility, often with little oversight. In one instance, the South African Social Security Agency partnered with the private company Cash Paymaster Services to deliver social services. The company required potential beneficiaries to register with biometric information, raising concerns about the processing of personal data and ultimately leading the South African government to back out of the contract.

As welfare systems become increasingly automated and digitized, access to benefits has often become contingent on invasive digital IDs, raising concerns that service delivery is being used as a tool to advance state surveillance. Kenya’s digital ID system requires people to hand over an extensive set of biometric data, including fingerprints, hand geometry, earlobe geometry, retina and iris patterns, voice waves, and DNA in digital form; those who choose not to register risk forfeiting their right to social services. In Venezuela, individuals cannot access state benefits without the ZTE-built homeland ID, which the government uses to track voting history and social media activity.

In addition, while governments and international institutions like the World Bank have touted ADM as a way to improve the delivery of services to those most in need, the ways in which these systems are designed may actually decrease access to benefits.

ADM systems developed for social assistance programs are often designed to detect fraud. In reality, welfare fraud is often overstated, and these programs regularly flag potential beneficiaries in error, which may disqualify them from receiving aid, usually with little recourse or remedy. For example, a Dutch tax authority algorithm introduced in 2012 incorrectly accused more than 20,000 families of childcare benefit fraud. In a scathing report on the spread of digital welfare, the former UN Special Rapporteur on extreme poverty and human rights warned that the automation of social service delivery has constricted funding for social assistance and could contribute to a “digital dystopia.” When governments introduce ADM into social service delivery, the result can be the exclusion of certain beneficiary groups and the elimination of services for vulnerable populations.

A lack of transparency in the design of ADM tools may also allow governments to evade accountability for potentially discriminatory and exclusionary systems. ADM programs are often developed in partnership with the private sector, which argues that sharing information about these systems would raise intellectual property concerns. This was the case in Serbia, where the country’s Ministry of Labor repeatedly denied civil society organizations’ Freedom of Information requests. In describing the protections afforded to private sector entities, technology researcher Shehla Rashid writes, “while it is people who should have a right to privacy and data sovereignty, it is algorithms and data processing mechanisms and practices that are accorded secrecy.” This opacity prevents civil society organizations from exposing how these systems function and holding governments accountable, allowing states and their technology partners to operate in a de facto “human rights-free zone.”

Toward More Transparent and Accountable ADM Tools

As digital systems to deliver social services proliferate, fundamental democratic and human rights principles such as transparency, accountability, and privacy should guide the design and use of automated decision-making systems. Civil society organizations can push for transparency, for example, by filing Freedom of Information requests to learn more about the technology and from whom it was procured. Using existing transparency tools can be a powerful counterweight to government obfuscation and can elucidate the role ADM plays in constricting access to benefits.

Digital rights groups can also help design independent, impartial algorithmic impact assessments (AIAs) to determine the risks of integrating ADM into social service delivery. As Krzysztof Izdebski explains in a 2023 International Forum for Democratic Studies publication, AIAs help expose poorly planned digitalization projects, which is particularly important in swing states or fragile democracies where such tools “may further erode political accountability where it is already under threat.” Proactively mitigating the potential harms of ADM on vulnerable and marginalized groups is an important step toward ensuring that automation serves the people and builds public trust in state institutions.

Additionally, pushing governments for transparency and accountability will require the public to recognize the risks of ADM tools. Civil society can launch educational campaigns to inform citizens of the benefits and risks of automated decision-making and provide them with the necessary tools to advocate for systems designed with human rights principles such as non-discrimination in mind.

While the risks ADM systems pose are significant, the automation of government processes has the potential to expand the delivery of essential state services. With the proper safeguards in place, digitalization can fit into a positive vision for tech-enabled democracy and even be an asset to democratic governments and the public. However, such a reality can only emerge from a steadfast commitment to defend fundamental human rights principles and democratic norms.

Authors

Maya Recanati
Maya Recanati is a program assistant at the National Endowment for Democracy’s International Forum for Democratic Studies, where she supports the emerging technologies and information space integrity portfolios. Previously, she worked as a Privacy Program and Policy Analyst at Venable Blue, helping ...
