Perspective

How Algorithmic Systems Automate Inequality

James O’Sullivan / Jan 16, 2026

Emily Rand & LOTI / AI City / Better Images of AI / CC-BY 4.0

The deployment of predictive analytics in public administration is usually justified by the twin pillars of austerity and accuracy. Governments and private entities argue that automated decision-making systems reduce administrative bloat while eliminating the subjectivity of human caseworkers. For example, the Dutch government explicitly framed its automated childcare benefits fraud system as a means of replacing discretionary human judgment with objective, data-driven risk scoring. But the operational reality of these systems suggests a different outcome. Rather than removing bias, they frequently operationalize it, embedding historical inequities into the seemingly neutral architecture of code. For marginalized communities, the threat is not that the technology will fail, but that it will work exactly as designed, scaling up the systemic discrimination that policy is supposed to mitigate.

This dynamic is clearest in the digitization of the welfare state. When agencies turn to machine learning to detect fraud, they rarely begin with a blank slate; they train their models on historical enforcement data. Because low-income and minority populations have historically been subject to higher rates of surveillance and policing, these datasets are saturated with selection bias. The algorithm, lacking sociopolitical context, interprets this over-representation as an objective indicator of risk, identifying correlation and deploying it as causality.
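
To make the mechanism concrete, the sketch below is a minimal, purely illustrative Python example built on synthetic data; the groups, the rates, and the “neighbourhood” proxy feature are invented assumptions, not details of any real system. It shows how a classifier trained on past enforcement outcomes, in which one group was investigated far more often, learns the resulting over-representation as a risk signal even though the true rate of fraud is identical in both groups.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical population: group B has historically been investigated far
# more often, so more of its members show up as "fraud cases" in the
# enforcement records later used for training.
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
true_fraud = rng.random(n) < 0.02             # same low base rate in both groups
investigated = rng.random(n) < np.where(group == 1, 0.30, 0.05)
labelled_fraud = true_fraud & investigated    # fraud is only "seen" if investigated

# Train on the biased labels, using a feature that merely correlates with
# group membership (e.g. a neighbourhood indicator).
proxy_feature = group + rng.normal(0, 0.3, n)
X = proxy_feature.reshape(-1, 1)
model = LogisticRegression().fit(X, labelled_fraud)

# The model scores group B as riskier despite identical true fraud rates.
risk = model.predict_proba(X)[:, 1]
print("mean risk score, group A:", risk[group == 0].mean())
print("mean risk score, group B:", risk[group == 1].mean())
```

The gap in the printed scores reflects nothing about the people being scored; it reflects who was looked at in the first place.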

The fallout from this logic was evident in the aforementioned Dutch childcare benefits scandal, or toeslagenaffaire, in which tens of thousands of families were wrongly accused of benefit fraud. The system’s design penalized dual nationalities and low incomes, treating them as proxies for deceit. This is a prime example of a statistical model executing a narrow directive to maximize financial recovery for the state without a countervailing directive to preserve due process for the public. The state reaped the gains when accusations were correctly placed, but when the system got it wrong, the harm fell entirely on the citizens who were falsely accused.

A similar asymmetry plagues the private sector, particularly in the labor market. The widespread adoption of algorithmic hiring tools promised to democratize recruitment by blinding selectors to demographic markers, but the ongoing litigation in Mobley v. Workday in the United States highlights the limitations of “blind” algorithms. If a model is optimized to select candidates who resemble current top performers, and the current workforce is homogeneous, the algorithm will penalize applicants with non-traditional backgrounds. It does not need to know a candidate’s race or disability status to discriminate against them; it only needs to identify data points—zip codes, gaps in employment, or specific linguistic patterns—that correlate with those protected characteristics.
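
The same dynamic can be reproduced in a few lines. The following Python sketch is hypothetical and uses synthetic data; the “employment gap” feature, the disability attribute, and every rate are assumptions made for illustration, not details drawn from the Workday litigation or any actual hiring tool. The screening model is never shown the protected attribute, yet its scores diverge because a visible feature correlates with it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical protected attribute, never provided to the model.
disability = rng.random(n) < 0.15

# Features the model does see; an employment gap correlates with disability.
employment_gap = (rng.random(n) < np.where(disability, 0.6, 0.1)).astype(float)
years_experience = rng.normal(8, 3, n)

# Historical "top performer" labels from a homogeneous workforce: past hiring
# rarely admitted anyone with a gap, so the label encodes that exclusion.
past_top_performer = (years_experience > 8) & (employment_gap == 0)

X = np.column_stack([employment_gap, years_experience])
screener = LogisticRegression().fit(X, past_top_performer)

# Candidates with the protected attribute receive lower scores on average,
# even though the attribute itself never appears in the feature matrix.
scores = screener.predict_proba(X)[:, 1]
print("mean score, no disability:", scores[~disability].mean())
print("mean score, disability:   ", scores[disability].mean())
```

Dropping the protected column changes nothing here, because the same information re-enters the model through the correlated feature.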

This creates a significant challenge for existing liability frameworks. In Europe, discrimination law distinguishes between direct discrimination and indirect discrimination, the latter covering cases where a seemingly neutral rule or practice disproportionately disadvantages a protected group, with no proof of intent required. Algorithmic discrimination, however, is diffuse and difficult to contest. A rejected job applicant or a flagged welfare recipient rarely has access to the proprietary score that disqualified them, let alone the training data or the weighting of its variables—they face a black box that offers a decision without a rationale. This opacity makes it nearly impossible for an individual to challenge the outcome, effectively insulating the deploying organisation from accountability. The burden of proof rests on the party with the least information.

Regulators, to their credit, are attempting to catch up. The EU’s AI Act and emerging US state laws introduce requirements for impact assessments and transparency, but enforcement remains a theoretical exercise. Most current auditing methods focus on technical fairness metrics, an exercise that essentially amounts to adjusting the calculations until error rates look balanced across groups. This technical formalism often misses the broader point, which is that a system can be statistically “fair” and still be predatory if the underlying policy objective is punitive. Calibrating an algorithm to flag poor families for fraud investigation at the same rate across racial lines does not solve the problem if the premise—mass surveillance of the poor—is itself wrong.
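
The limits of that kind of formal parity are easy to demonstrate. The sketch below uses invented flag and fraud rates on synthetic data; it constructs a system whose error statistics are balanced across groups by design, yet which still pulls large numbers of innocent households into investigations, because the underlying decision to surveil at scale is left untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Hypothetical benefits caseload: two groups, flagged at the same rate by design.
group = rng.integers(0, 2, n)
flagged = rng.random(n) < 0.10           # "fair": identical flag rate for both groups
actually_fraud = rng.random(n) < 0.02    # low true fraud base rate

for g in (0, 1):
    in_group = group == g
    flags = flagged & in_group
    wrongly_flagged = flags & ~actually_fraud
    print(f"group {g}: flagged={flags.sum()}, wrongly flagged={wrongly_flagged.sum()}")

# Parity across groups, but thousands of innocent households are still
# swept into fraud investigations.
print("total wrongly flagged:", (flagged & ~actually_fraud).sum())
```

The per-group numbers come out nearly identical, and the aggregate harm remains.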

Policy interventions must move beyond the optimization of error rates and formal parity measures. Treating fairness as a matter of calibrating statistical outputs assumes that the primary problem lies in mismeasurement, rather than in the social and institutional conditions that produce the data in the first place. Algorithmic systems do not observe the world directly; they inherit their view of reality from datasets shaped by prior policy choices and enforcement practices. To assess such systems responsibly requires scrutiny of the provenance of the data on which decisions are built and the assumptions encoded in the variables selected.

This is particularly evident in the widespread use of proxies, measurable stand-ins for complex social attributes that are difficult or politically sensitive to state directly. Variables such as length of phone calls, frequency of address changes, social network density, or patterns of online interaction are often presented as neutral indicators of a person’s reliability or trustworthiness, when in practice, they function as indirect measures of class, precarity, migration status, disability, or care responsibilities. Because these attributes are unevenly distributed across society, proxy-based systems systematically disadvantage those whose lives do not conform to the behavioral norms of the datasets against which they are assessed.

The essential problem is that these proxies reduce complex social realities to fragments of behavioral data, stripping away context and interpretation. A short phone call may indicate limited credit, linguistic difference, or time pressure rather than evasiveness, while sparse digital networks may reflect age, poverty, or deliberate disengagement from online platforms rather than social isolation or risk. When such signals are elevated to decisive indicators, the system engages in a form of digital phrenology, inferring character and future behavior from surface traces that bear only an incidental relationship to the qualities being judged.

These inferential shortcuts disproportionately harm individuals with non-standard or precarious digital footprints, those whose lives are shaped by irregular work, informal care, migration, or limited access to infrastructure. These are often the same populations already subject to heightened scrutiny by welfare agencies and public institutions. Algorithmic systems compound disadvantage by treating deviation from a statistical norm as evidence of risk, while presenting this judgment as the outcome of objective analysis rather than a choice embedded in system design.

The danger of these systems lies in their ability to launder political decisions through technical processes. By framing allocation and enforcement as engineering problems, institutions can sidestep the moral and legal scrutiny that usually accompanies policy changes. For the communities at the sharp end of these decisions, the result is a hardening of social strata, enforced by a bureaucracy that is harder to see, harder to understand, and significantly harder to fight.

Authors

James O’Sullivan
James O’Sullivan is Senior Lecturer in Digital Humanities at University College Cork. His writing has appeared in The Guardian, Noema, The Irish Times, Irish Examiner, and the LA Review of Books, among other publications. He is the author and editor of several books, including Towards a Digital Poet...
