Perspective

Brazil Is Preparing for Its First Real AI Election, But Is It Ready?

Andressa Michelotti, Tatiana Dourado / Mar 20, 2026

Brazilian President Luiz Inácio Lula da Silva speaks at the Korea-Brazil Summit
in Seoul on Feb. 23. (Republic of Korea)

This year is poised to be an inflection point in Brazil for efforts to regulate the use of artificial intelligence in elections, but there are lingering questions about whether a key court will be able to keep pace with the issue, which surged onto the scene over the past decade.

Much like the 2016 elections in the United States, 2018 in Brazil served as a tipping point for information manipulation, disinformation campaigns and influence operations. In both cases, bad actors leveraged digital platform infrastructure and its engagement and monetization logics to calibrate reach and visibility, shaping online trends, political perceptions and electoral attitudes.

In response to these digital threats, Brazil’s Superior Electoral Court issued specific resolutions in each electoral cycle since 2019 regulating online electoral advertising and the misuse of emerging technologies, while establishing transparency measures and cooperation mechanisms with digital platforms to safeguard electoral integrity.

In the years since, these resolutions have partially compensated for the absence of a permanent regulatory framework for Big Tech companies and placed a disproportionate regulatory burden on the Electoral Court. The court holds multiple responsibilities, including overseeing the electoral system and political parties, judging cases, enforcing rules and educating citizens.

While the Electoral Court remains a cornerstone of democratic oversight during electoral periods, its role in AI regulation raises pressing questions about whether electoral institutions are well equipped to govern emerging technologies.

In the past decade, the tech policy arena has shifted dramatically. However, nothing has been as ground-shaking as the rapid deployment of OpenAI's ChatGPT in late 2022 and the risks that have emerged with the popularization of large language models (LLMs) — especially with regard to disinformation.

In 2024, during Brazil’s municipal election cycle, generative AI-related policies were included in an electoral resolution for the first time. Under these provisions, any use of AI technologies in political ads must be explicitly disclosed by advertisers. Yet the measures proved insufficient. The 2024 election was packed with misinformation generated by LLMs, deepfakes and altered political content.

Since then, the pace of AI development has accelerated sharply. The cost of running AI models has declined drastically, making it easier to deploy new tools. In the past two years, OpenAI and Google have launched major products integrating video and images, such as Sora 2 and Nano Banana. Agentic AI has taken center stage, with Claude Code, Gemini CLI, GPT-5 agents and Grok offering generative AI tools and autonomous agents that plan and execute multiple tasks.

In addition, AI chatbots around the world are in many cases displacing mental health professionals, counselors, romantic partners and even friends. By July 2025, 18 billion messages were being sent by 700 million users through ChatGPT on a weekly basis — around 10% of the global adult population.

While some argue that the current concerns about the effects of GenAI tools on elections are overblown, research also suggests that AI chatbots can shift attitudes with conversations that may persuade voters. In addition to the growing capacity to provide political insights, the threat posed by these tools includes incorrect outputs, hallucinations and the fast creation and dissemination of low-cost and high-quality misleading or false content.

Brazilians are broadly tech enthusiasts and early adopters of new tools. While this can serve as an ideal market for platforms seeking to expand their reach and to intensify their surveillance of consumers, history also demonstrates that these same products can affect the country's democracy, particularly during major elections.

In Brazil, ChatGPT is one of the most commonly used AI platforms. According to a 2025 OpenAI report, the country ranks among the top three in weekly ChatGPT usage, with around 140 million messages exchanged on the service per day. In addition, Anthropic's Economic Index identifies Brazil as a market that deserves attention — one with significant potential in terms of AI penetration and usability.

With the 2026 presidential election on the horizon, Brazil's Superior Electoral Court has set the rules for platforms and political advertising for this year. The resolution introduces a few regulatory innovations, such as mandatory disclosure for AI use in political advertising on internet application providers' systems; a blackout period banning AI-generated content featuring candidates or public figures in the 72 hours before and 24 hours after the election; and expanded platform liability for failure to remove unlabeled or non-compliant content during the electoral period.

The latest resolution defines “artificial intelligence” as follows:

A “computational system developed on the basis of logic, knowledge representation, or machine learning, obtaining an architecture that enables it to use input data from machines or humans in order to, with varying degrees of autonomy, produce synthetic content, predictions, recommendations, or decisions that meet a set of predefined objectives and are capable of influencing virtual or real environments.”

However, terms such as “generative AI,” “large language models,” “agents” or “chatbots” are not currently defined.

Although the resolution does not explicitly mention chatbots, it reflects an attempt to regulate conversational AI systems and major generative AI platforms by establishing that service providers offering AI tools must not rank, recommend or prioritize candidates, parties or coalitions; express opinions, show electoral preference or recommend votes; create or alter audiovisual content to include nudity or sexual scenes involving candidates; or produce electoral content that constitutes political violence against women.

While the Electoral Court historically has moved at a relatively fast pace by updating its enforcement every two years, the deliberative window for this initiative is narrow, leaving critical gaps.

The current resolution does not cover the threats posed by wearables, especially during voting times, such as Meta's smart glasses. It also excludes the role of AI companions in personal conversations. Nor does it tackle how bad actors can use AI agents to automate malicious tasks, or the role of what we refer to in Brazil as "mini-techs" — smaller technology companies offering AI services without local representation, whose origins are difficult to track and that tend to be involved with malicious services such as nudify apps and AI swarms.

Despite input from academics, civil society and practitioners, the final resolution rests in the hands of a small group of judges who also sit on the Supreme Federal Court, granting them broader power over electoral issues and technological regulation. This dual mandate, combined with their limited technical expertise, raises serious questions about the Electoral Court's ability to keep pace with the complexities of AI.

This misalignment between electoral governance and technological governance tends to make the Brazilian model reactive, episodic and case-specific, leaving little room for action on systemic risks or for the adoption of preventative measures.

In the absence of a comprehensive AI regulatory framework, with certain exceptions, the Superior Electoral Court has become the de facto regulator of AI-related matters in Brazil. However, the Court’s jurisdiction is confined to electoral contexts, leaving broader AI governance issues largely unaddressed.

Certainly, the Superior Electoral Court plays a central role in safeguarding the core procedures of democracy. Nevertheless, its ability to regulate emerging technologies is constrained, and the responsibility should not rest solely within its purview.

Meanwhile, Brazil's broader AI regulation bill (No. 2338/2023) remains stalled in its lower legislative chamber, with no confirmed date for it to be taken up.

While President Luiz Inácio Lula da Silva has called for urgency in regulating AI, especially in light of the elections, the tech industry has been heavily lobbying against the bill, stymieing the progress of those efforts.

This lack of a national AI law has left the Superior Electoral Court to serve, in effect, as Brazil's top AI regulatory body — an overburdened institution producing convoluted regulations at the very moment when AI development and its attendant risks are most pressing.

Authors

Andressa Michelotti
Andressa Michelotti is a Political Science Ph.D. candidate at Universidade Federal de Minas Gerais (UFMG) in Brazil, Executive Secretary at the Counter-Disinformation Task Force (SAD, Sala de Articulação contra a Desiformação), researcher at Margem (Research Group on Democracy and Justice) and affil...
Tatiana Dourado
Tatiana Dourado is an Assistant Professor in the Department of Communication at the Pontifical Catholic University of Rio de Janeiro and Director of Digital Policies at the Democracia em Xeque Institute. She is also an associate researcher at the National Institute of Science and Technology for Digi...
