What the Debate Over Differing Approaches to Online Freedom of Expression in the US and Brazil is Missing
Fernanda Buril / Jul 18, 2025

Dr. Fernanda Buril is the Deputy Director of the Center for Applied Research and Learning at the International Foundation for Electoral Systems (IFES).

Brasília, Brazil: Brazil's Supreme Court (Supremo Tribunal Federal, STF) at dusk. Shutterstock
Last month, Brazil’s Supreme Court decided to expand Big Tech’s liability for online content—a move that drew praise from President Lula and concerns from Google. Meanwhile, in the United States, an aggressive push for deregulation is underway, despite the last-minute removal of a controversial moratorium on state AI laws from a federal budget bill. Leaders on both sides claim to be protecting democracy with their pro or anti-regulation stances. Just last week, tensions between the two countries escalated when President Donald Trump announced new tariffs on the Latin American country citing, among other things, what he called the Brazilian Supreme Court’s “Censorship Orders to US Social Media platforms.”
Trump’s attempt to directly influence other countries’ regulatory decisions can undermine their ability to find solutions that best fit their contexts. With technology advancing so quickly and affecting the information environment so profoundly, we need to move on from the philosophical debate on whether constraints on online content are good and focus instead on identifying when and to what extent they are necessary. When they are needed, we need to ensure regulations are locally tailored, adaptable, and complemented by longer-term strategies that will ideally render them obsolete.
Two conflicting approaches to online freedom of expression
Although both the United States and Brazil have enshrined freedom of expression in their constitutions, the two countries protect this right in very different ways. Take, for instance, approaches to synthetic media in election contexts. To date, more than 20 US states have passed or introduced legislation to regulate the use of AI-generated content, and particularly deepfakes, in election communications, yet there are no regulations at the federal level. In addition to disagreements between Democratic and Republican members of Congress, which have hindered the passage of any bills, courts have consistently invoked the First Amendment to allow candidates to say almost anything they want in political advertising. In blocking a controversial California law expanding the timeframe for banning deceptive AI content targeting political candidates, the judge argued that "principles safeguarding the people's right to criticize government and government officials apply even in the new technological age when media may be digitally altered."
Going in a very different direction, Brazil's Superior Electoral Court (Tribunal Superior Eleitoral, TSE) recently modified Resolution 23.610/2019 to address the use of AI in elections. In addition to explicitly allowing the use of AI-generated content in certain circumstances, the resolution imposes restrictions, including a ban on deepfakes, and holds Big Tech companies liable if they fail to remove content that violates the policy during the electoral period. The resolution also requires these companies to adopt and publicize measures to reduce the dissemination of content that undermines electoral integrity.
The Brazilian Supreme Court also moved to expand Big Tech liability, making online platforms responsible for proactively taking down certain third-party content even without a specific judicial order to do so. To illustrate his argument in favor of the move, Brazilian Supreme Court Justice Flávio Dino asked, "should anybody be allowed to build an airline and fly, with no regard to regulations, in the name of freedom of movement?"
Domestic political clashes with international ramifications
Despite efforts in Europe, the UK, Brazil, and other countries to move in a different direction, the US posture toward online speech regulation since President Trump returned to the White House could reverberate across the world. In addition to the newly announced tariffs, US Secretary of State Marco Rubio suggested recently that figures such as Brazilian Supreme Court Justice Alexandre de Moraes could be sanctioned over what the White House regards as online censorship. If these sanctions materialize, they may have a deterrent effect on other judges and decision-makers, in Brazil and around the globe. Moreover, this external pressure could hinder each country's ability to develop tailored regulations based on its society's current needs.
Understanding a society’s needs
Regulations that work in one country might not work—or be needed—in others. And regulations that work now might not work—or, again, be needed—in the future.
Norway presents an interesting example. The country has a light regulatory framework that seems to be working so far, largely due to an already high rate of media literacy among the population. An expert group on artificial intelligence and elections appointed by the Norwegian government underlined that citizen trust in editor-controlled media can impact whether AI-generated media finds a foothold. Citing data from the Norwegian Media Authority, the report mentions that 73% of Norway’s population have “fairly high” or “very high” trust in Norwegian news media in general. Norwegians are also far more likely to go directly to the source of information (official news media websites and apps) and are less likely to use social media and search engines as sources of news. Trust in and reliance on reputable editorial media could thus minimize exposure and vulnerability to deceptive artificial content that is disseminated freely on social media.
In the US, however, Pew Research data suggest that trust in traditional news media is declining, while trust in social media sites is increasing. In Brazil, online media are already the main source of information for most people. In these circumstances, media users might not only be exposed to more deceptive content but might also be less likely to verify information against official sources, becoming more vulnerable to making ill-informed decisions, including at the ballot box.
Preparing tomorrow’s citizens while protecting today’s voters
The anti-regulation argument can be appealing. Regulations are not fun. They incur costs, slow down processes, and trigger backlash from different, and powerful, sectors. In an ideal world, we would not need them. But as undesirable as those downstream consequences may be, they matter less than protecting citizens and mitigating real-world harm.
In cautioning against AI regulations, some experts have pointed to "smarter strategies," including digital literacy and critical thinking programs. While there is no denying that these programs are needed, the skills they build cannot be developed overnight. In some rare contexts, all or most content might circulate freely because people have the knowledge and skills to judge what is authentic and relevant and to ignore noise and falsehoods. But this will not work everywhere, especially not in the short term, while people are still getting used to the idea that something they see with their own eyes may not be real. Today's voters, many of whom remain largely vulnerable to deception by AI-generated content, need the opportunity to find their footing in this chaotic information environment, not least to pave the way for a new generation of digitally literate and confident voters.
Constraints on content are not to be taken lightly, particularly given the power of controlling narratives on electoral and political outcomes. But decision-makers who consider the realities in their countries to develop reasonable regulations and also invest in longer-term, sustainable solutions—such as digital literacy—can chart a path that addresses the challenges of the day without compromising the democratic foundations they aim to protect.