How Does the Public Sector Identify Problems It Tries to Solve with AI?

Maia Levy Daniel / Jul 5, 2022

Maia Levy Daniel is a research affiliate at the Center for Technology and Society (CETyS) at Universidad de San Andrés in Argentina.

Nowadays, no matter where you're reading this from, it seems that every problem that we face can be solved with artificial intelligence (AI). From banal applications such as getting recommendations on shows and films to more fraught ones such as identifying criminals or hiring personnel, AI systems can purportedly address any concern.

Arguably, AI could make decisions faster and more objectively than human beings. However, as experts now understand, AI is not neutral, and its implementation can pose risks to human rights, such as the rights to privacy and personal data protection, the right to equality, and the right not to be discriminated against.

The public sector has a similar appetite for "technosolutionism." Many governments around the globe are implementing AI systems in their processes to automate tasks and decision-making and, in theory, reach better and more efficient outcomes. According to a 2021 KPMG study, 79% of government decision-makers believe that AI will improve bureaucratic efficiency.

Let's take a real-life example of AI implementation. A national government decides to use AI to automate the process of finding jobs for citizens. The AI system is meant to match workers with vacancies and to profile the workers in its databases according to their capacity to be re-employed. Civil society organizations have raised concerns and questions about this implementation: How are people's data stored and protected? Can workers access a justification of how decisions are made? Is anyone thinking about the human rights that could be affected?
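
To make these concerns concrete, here is a purely hypothetical sketch, in Python, of the kind of opaque scoring such a profiling system might apply. The features, weights, and threshold below are invented for illustration and describe no actual government system.

```python
# Hypothetical illustration only: a simplified "re-employability" score of the
# kind a worker-profiling system might compute. Features, weights, and the
# threshold are invented; no real system is described here.
from dataclasses import dataclass


@dataclass
class Worker:
    months_unemployed: int
    age: int
    completed_trainings: int


def employability_score(worker: Worker) -> float:
    """Return a score between 0 and 1 used to rank workers for vacancies."""
    score = 1.0
    # Each weight quietly encodes a policy judgment (e.g., penalizing age),
    # which is exactly why civil society asks how decisions can be justified.
    score -= 0.05 * worker.months_unemployed
    score -= 0.01 * max(worker.age - 40, 0)
    score += 0.10 * worker.completed_trainings
    return max(0.0, min(1.0, score))


# Workers above an arbitrary threshold are matched to vacancies first; those
# below it may never be told why they were passed over.
PRIORITY_THRESHOLD = 0.6
```

Even in this toy version, a worker who falls below the threshold has no obvious way to learn why, which is exactly the kind of opacity the questions above are meant to surface.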

In such a scenario, there are two fundamental questions to ask when deciding whether to implement AI: 1) Has the problem been correctly identified? and 2) Is the proposed solution the best way to address that particular problem? There are notable examples where these basic questions might have helped avoid problematic applications. For instance:

  • The 'visa streaming' algorithm implemented by the United Kingdom (UK) Home Office to filter visa applications. In 2020, five years after its launch, the government suspended its use in response to a legal challenge raising concerns about unconscious bias and racism.
  • An algorithm implemented by the Italian government to evaluate and decide on mobility requests from teachers in the public education system, without human oversight. The algorithm ended up assigning teachers to the wrong posts and prompted complaints from around 10,000 teachers.
  • In Chicago, Michael Williams was jailed for allegedly killing another man based on evidence produced by ShotSpotter, a system that uses surveillance microphones and algorithmic analysis to identify the sound of gunfire. Williams was released after 11 months when prosecutors concluded the evidence from the system was insufficient to meet the burden of proof. According to the Associated Press, “the algorithm that analyzes sounds to distinguish gunshots from other noises has never been peer reviewed by outside academics or experts.”

These examples should give pause to anyone in the public sector considering new technology. A proper analysis of implementing AI in a particular field or process needs to start by identifying whether there actually is a problem to be solved. In the job-matching case, for instance, the problem would be the country's unemployment levels and, presumably, labor imbalances in specific fields. Then: would AI be the best way to address this specific problem? Are there alternatives? Is there any evidence that AI would be a better tool? Building AI systems is expensive, and the funds the public sector spends come from taxpayers. Are there alternatives that could be less expensive?

Moreover, governments must understand from the outset that these systems can pose risks to civil and human rights. Thus, they should justify in detail why they might choose a more expensive or riskier option. A useful guide to follow is the one developed by the UK's Office for Artificial Intelligence on how to use AI in the public sector, which includes a section specifically devoted to assessing whether AI is the right solution to a problem.

AI is such a buzzword that it has become appealing for governments to reach for it as a solution to any public problem, without even looking for available alternatives. Although automation can accelerate decision-making, speed should not be prioritized over quality or over the protection of human rights. As Daniel Susser argues in his recent paper, the speed at which automated decisions are reached has normative implications. Incorporating digital technologies into decision-making processes affects the temporal norms and values that govern those processes, disrupting prior norms, re-calibrating settled trade-offs, or displacing automation's costs. As Susser suggests, speed is not necessarily bad; however, "using computational tools to speed up (or slow down) certain decisions is not a 'neutral' adjustment without further explanations."

So, conducting a thorough diagnosis, one that identifies the specific problem to address and the best way to address it, is key to protecting citizens' rights. And this is why transparency must be mandatory. As citizens, we have a right to know how these processes are conceived and designed, why governments choose to implement particular technologies, and what risks are involved.

In addition, maybe a good way to ultimately approach the systemic problem and change the structure of incentives is to stop using the pretentious terms "artificial intelligence," "AI," and "machine learning," as Emily Tucker, the Executive Director of the Center on Privacy & Technology at Georgetown Law Center, announced the Center would do. As Tucker explained, these terms confuse the average person, and the way they are typically employed makes us think it is a machine rather than human beings making the decisions. Removing the marketing terms from the equation and giving more visibility to the humans involved may make these technologies seem less exotic.

Governments must understand that technology won't solve each and every problem. Before even thinking about applying AI, leaders need to establish whether there is an actual problem, what solutions are available, why AI could be the best alternative, and how that solution would protect human rights. Given the many risks involved, only once all these questions have been posed and answered should governments move forward with implementing AI in their decision-making processes.

