AI Regulation in Latin America Requires a Thoughtful Process

Maia Levy Daniel / Jul 27, 2023

Maia Levy Daniel is a tech policy and regulation specialist and a research affiliate at the Center of Technology and Society (CETyS) at Universidad de San Andrés in Argentina.

Similar to what is happening in other regions, countries in Latin America are currently discussing how to deal with artificial intelligence (AI) and generative AI applications. For several reasons, the scenario is different from the one in the U.S. or Europe, but these technologies are already being used and developed, and alarms have been raised.

I previously shared a few thoughts for Tech Policy Press on the use of AI and emerging technologies by the public sector in the region–and, particularly, missteps in more than one judiciary. Here, I discuss some recent policy developments, and examine a few points made in a recent piece by Elisabeth Sylvan and Armando Guio Español at the Berkman Klein Center for Internet and Society, which offers recommendations to Latin American governments on the steps they should take around generative AI regulation.

Latin American legislators have introduced a variety of new measures related to AI in recent months. Perú recently passed a law promoting the use and development of AI to improve public services and economic and social activity at the national level, and declaring its use to be of national interest. In Costa Rica, a group of legislators recently presented a bill on AI regulation–which, ironically, was written by ChatGPT. Finally, in Argentina, the national government published a document entitled "Recommendations for a trustworthy AI", aimed at providing the public sector with theoretical and practical tools to develop or implement AI systems. The document highlights that AI could "make government management more efficient and improve the design and implementation of policies and the delivery of essential services in health, education, security, transportation, environmental care, etc." According to the document, "governments can also use AI to improve communication and engagement with citizens."

In this context, we should first agree on a shared baseline in order to start working on any AI regulations in the region. According to Sylvan and Guio Español, "the narrative about generative AI has been, on average, negative, typically focusing on how tools will replace humans." Although that might be the case among specialists in specific areas, this statement does not seem to apply to either the private or the public sector. In particular, governments in the region have been using AI and ChatGPT to automate several processes–the Costa Rican bill on AI regulation that was created with ChatGPT is a relevant example–and publicizing that use has been presented as an achievement. Judges have already used ChatGPT in various rulings and for different purposes in Colombia, Bolivia, Mexico, and Peru. The narratives the judges employed in those cases were not negative; on the contrary, they seemed keen to advertise ChatGPT as a useful, innovative, and necessary tool. Thus, if we want to move forward on recommendations around the use of generative AI in the region, it is crucial to work from a shared diagnosis that accounts for the main differences between countries, sectors, and issue areas.

Second, at least for legislative and policy purposes, Latin America should not be treated as a monolith. Countries in the region differ in the resources they have, their economic situation, and their relative geopolitical power–all of which shape their priorities and dynamics–so recommendations should not be overly generalized. Latin American countries are currently at different stages of development with regard to AI regulation. Some have already published their AI strategies and are working on regulations, others have only nascent strategies and may or may not have developed action plans, and some do not yet have a strategy at all. In addition, the maturity of democracies and respect for fundamental rights vary throughout the region, so trust in governments to promote ethical and human rights-respecting uses of generative AI will vary as well.

A similar challenge–though at the global level–is currently being faced by civil society organizations in Latin America and Asia-Pacific that are pushing back against UNESCO's digital platforms regulation guidelines for not taking regional concerns into account. As they note, generalized proposals risk justifying more regulation rather than better regulation, and could incentivize authoritarian national regulations. In the Latin American scenario, it is necessary to identify and understand local realities in order to arrive at feasible and effective recommendations, including human rights safeguards that keep those recommendations from legitimizing just any regulation–particularly if, as suggested by Sylvan and Guio Español, "AI tools need to be vetted." As civil society organizations have stressed, human rights impact assessments are necessary to ensure that processes and contents are aligned with human rights standards, and that every perspective is meaningfully considered.

Moreover, while it would be efficient for countries in the region to collaborate in developing policies and legislation, as Sylvan and Guio Español suggest, evidence shows that this may be problematic. In recent years, a few Latin American countries have approved and implemented hate speech laws that do not comply with human rights standards and restrict freedom of speech, and some–such as Venezuela, El Salvador, and Nicaragua–have even passed criminal laws that have been used to censor speech. Something similar has happened in Europe, where authoritarian countries have implemented online content regulations modeled on problematic aspects of the German NetzDG. Promoting collaboration and mutual learning may therefore be risky when alignment with ethical and human rights standards cannot be ensured.

Finally, Sylvan and Guio Español suggest creating an AI alliance in Latin America to bring together "governments and other organizations to develop a shared agenda." Would an alliance be the most adequate way of addressing these concerns? What exactly are the concerns this alliance would address, and what are the expected outcomes? Would governments meaningfully participate and collaborate with other stakeholders? How can we ensure that governments in this alliance would advocate for a human rights-respecting approach? And do we need governments to be part of this alliance at all?

For an initiative like this to work, governments must engage meaningfully and sustain their efforts over time. In Latin America, it is common for commitments and policies to be completely disregarded once the administration changes–as happened with Argentina's AI strategy, which was created and published by one administration at the end of its mandate and ignored entirely by the administration that followed. An interesting alternative may be to bring together relevant stakeholders–many have already developed research and tools on the use of AI in the region–to collaborate and advise governments on AI regulation when necessary, taking into account each local context and its needs.

The answers to all these questions may, after all, lead to building an AI alliance; however, as we already know, building these processes and spaces is difficult and costly–let alone making them meaningfully participatory, inclusive, and collaborative. Hence, all these questions should be posed, widely shared, and discussed among the relevant stakeholders in the region in order to understand what is needed to guarantee ethical and rights-respecting policies and regulations.

While it is clearly necessary to ensure that AI development and use in the region respect human rights and ethical standards, proposals should not be rushed. As Sylvan and Guio Español point out, inaction may deepen existing problems, but responses need to be proportionate to the real risks in the region, not reactive to hype and fear. That is why we need to assess what is happening in each country and take the necessary precautions before making any recommendations on AI regulation. Although it takes time, successful regulation must start with a thorough analysis of local realities and possibilities, without overlooking potential unintended harms to democracy.
