The Technosolutionism Trap: The Risky Use of Tech by the Colombian Judiciary

Maia Levy Daniel / Apr 10, 2023

Maia Levy Daniel is a tech policy and regulation specialist and a research affiliate at the Center of Technology and Society (CETyS) at Universidad de San Andrés in Argentina.

Cartagena, Colombia. Felipe Ortega Grijalba/Wikimedia

Every day we see more and more uses of machine learning, artificial intelligence, and related technologies in numerous fields. Healthcare, education, and work are only a few of the many areas that are allegedly being improved by the implementation of these technologies. Now, the Colombian judiciary has joined this technosolutionist trend.

In one case (Case 1), in January 2023, a judge used ChatGPT to help draft a decision waiving medical fee payments for the treatment of a child with autism. In another case (Case 2), in February 2023, a judge conducted a hearing in the so-called 'metaverse' and used ChatGPT to understand and explain how identities could be verified there. Case 2 was a traffic-related case in which the petitioner requested a virtual venue for the hearing and the defendant agreed. The hearing was streamed on YouTube.

The use of technology in these two cases raises several problems and questions–some of which I discussed in a previous piece for Tech Policy Press on the use of AI by the public sector. Here, I focus on two critical issues and analyze specific arguments offered by the judges in the Colombian cases, which reflect broader concerns about the use of technology in the public domain.

1. How the public sector is using technologies and why

In Case 1, the judge asked ChatGPT questions such as the following: Is a child with autism exempt from paying for his therapies? Should the acción de tutela (the judicial protection the mother was requesting to safeguard her child's fundamental right of access to health) be granted in this case? Does requiring payment in these cases constitute a barrier to healthcare access? Has the Constitutional Court already decided favorably in similar cases?

Similarly, in Case 2, the judge asked ChatGPT the following: What is an avatar? What is the most effective method to verify the authenticity of those participating in a virtual meeting or hearing? What is the authenticity verification method for an avatar in the metaverse?

In both cases, the judges are using ChatGPT not as an aid to do their jobs, but to answer fundamental questions about how they should do their jobs. As stated in a recent piece on these cases, using generative AI in the judiciary could be harmful, since this technology "does not have the capacity to deeply understand contextual information and facts, which are the foundation of judicial discretion."

However, before diving into the risks involved, it is imperative to step back and address the questions that precede these decisions: Why did the use of technology seem necessary in these cases in the first place? What problem were the judges trying to tackle? Even more fundamentally, why were these judges asking ChatGPT how they should argue and decide their cases?

Moreover, the hearing in Case 2 was held in the metaverse. The participants needed to wear VR headsets–which the court provided when a party could not meet the technical requirements–or, at a minimum, to join from the web. The platform used was Meta's Horizon Workrooms, in which the judge, the petitioner, and the defendant appeared, with slide presentations, in a setting replicating a courtroom. The main question here is: Was any of this necessary, when a video call platform would have imposed fewer technical requirements while offering the same valuable characteristics the participants attributed to immersive technologies? Virtual reality allows private companies to collect far more personal data–and more types of it–than a video call platform does. Additionally, why did the judge choose Meta's Horizon Workrooms over other alternatives? The choice of platform matters when a public actor makes it, because it allows a private company to collect, store, and use personal data–and, in the case of virtual reality, potentially sensitive data.

Two key issues seem to lie behind these decisions, particularly with regard to the use of ChatGPT. The first is the belief that artificial intelligence systems are "intelligent," and that their decisions may be more objective and sounder than those made by a person. That is why the terms "artificial intelligence" and "machine learning" may be misleading: they do not reflect the fact that there are ultimately often opaque human systems behind the technology and, in the case of large language models, substantial concerns about reliability raised by the companies themselves. On its site, OpenAI highlights ChatGPT's limitations, such as "social biases, hallucinations, and adversarial prompts", and disallows its use for "high risk government decision-making."

The second issue relates to the use of technology as an indicator of success. It is common–particularly among public actors–to announce the implementation of a technology as a success in itself. Implementing AI or holding a hearing in the metaverse might look sophisticated, as attested by the many news articles published around the world after the Colombian metaverse hearing, some stating that Colombia was at the forefront of the region–and implying that this is a positive breakthrough for the country. This is even more problematic given that these are public actors spending taxpayer money that could be used for something else.

2. How public actors are justifying their decisions to use technologies

In both cases, the judges mention that Colombia has passed laws and regulations allowing the use of technology in judicial processes. However, is this a sufficient argument for judges to use AI and immersive technologies for these particular purposes?

In Case 1, concerning a child with autism's right of access to healthcare, the justification for the use of ChatGPT was "optimizing the drafting of decisions." Still, the questions asked were not about administrative aspects of the decision but about its core arguments. Although the judge highlighted that he would "previously review the information provided by artificial intelligence (systems)," letting ChatGPT answer such sensitive questions is risky and may interfere with the fundamental rights of the parties.

Additionally, the judge in Case 2 mentioned that, because her court "supports the judiciary's digital transformation," she approved holding the hearing in the metaverse. She also justified the use of the metaverse by stating that it is a useful tool for people in different places–but she mentioned neither where the parties were participating from in this case nor why a video call would not have met the necessary requirements.

- - -

These cases set problematic precedents, not only for Colombia but also for Latin America and the rest of the world. While writing this piece, I became aware of a judicial decision in Peru on March 27 in which the judge also used ChatGPT, in that instance to perform a mathematical calculation. Once again, identifying the problem is crucial to understanding whether a given technology would be a suitable solution.

Public actors must be transparent and provide exhaustive arguments for why they need to implement a specific technology. This is not to say that technologies are not useful for the work of judges; assistive tools–such as generative AI for research or the drafting of written material–could be helpful and might address the case overload judges often face. However, given the potential human rights risks as well as the special duties and responsibilities that judges have toward citizens, the judiciary cannot decide on its own when and how to use technologies. It must instead develop guidelines that explicitly determine in which cases technologies can be used and for what specific purposes, and these guidelines must first be shared and meaningfully discussed with other relevant stakeholders.

Contrary to what the Colombian rulings imply, the procedure must be the other way around. The implementation of technologies in the public domain should not be considered until each and every question is comprehensively answered and the use of the technology is exhaustively justified.
