A Review of Content Moderation Policies in Latin America

Mateo García Silva, Maria Fernanda Chanduvi / Jul 8, 2024

Mateo García Silva and Maria Fernanda Chanduvi were Georgetown University McCourt School of Public Policy Tech and Public Policy Fellows with Tech Policy Press in Spring 2024.

As digital platforms increasingly influence public discourse, managing online content—from removing illegal and harmful material to suppressing misinformation—presents complex challenges. Decisions about content moderation can significantly impact various aspects of society, including political dynamics, personal safety, and cultural norms.

This article examines content moderation policies in various Latin American countries. Unlike the EU, which has a robust regulatory approach exemplified by the Digital Services Act (DSA) and Digital Markets Act (DMA), Latin America has a diverse and evolving landscape of digital regulation. In this region, countries are grappling with their unique challenges, often influenced by differing political climates, legal frameworks, and levels of technological development.

A Regional Discussion

In Latin America (LATAM), the balance between regulating content and protecting free speech is shaped by the dynamic interplay of national sovereignty and regional regulatory frameworks. Regional regulation also significantly shapes content moderation policies in the EU, where supranational governance structures coexist with national sovereignty. The EU has implemented legislative measures such as the DSA and the General Data Protection Regulation (GDPR) to address online content issues while respecting the diversity of member states' legal traditions and cultural values. These regulations establish common standards for content moderation, including obligations for platforms to remove illegal content promptly while ensuring transparency and user rights.

However, the EU's regulatory framework must balance harmonizing regulations across member states with respecting national sovereignty. Member states retain authority over some areas of content regulation, such as hate speech laws and national security concerns, leading to variations in enforcement practices and legal interpretations. This tension between regional harmonization and national sovereignty underscores the complexity of balancing content regulation with free speech protections in the EU.

Similarly, in Latin America, where diverse political systems and legal traditions prevail, considerations of national sovereignty and regional cooperation influence the balance between regulating content and protecting free speech. While some LATAM countries are already debating comprehensive legal frameworks to govern online content, others prioritize national sovereignty over common regulation of content moderation practices. Factors such as political instability, limited resources, and historical contexts of censorship may contribute to the absence of digital regulations in certain countries.

Some countries have implemented comprehensive legal frameworks addressing various aspects of online content, while others maintain only limited regulations. For example, Brazil and Mexico have relatively advanced digital laws and regulations, with specific provisions addressing data privacy, cybersecurity, and online content moderation. Brazil's Marco Civil da Internet, enacted in 2014, is landmark legislation establishing principles, rights, and obligations for internet users and service providers. Similarly, Mexico's Ley Federal de Telecomunicaciones y Radiodifusión includes internet regulation and incipient content moderation provisions.

However, this decentralized approach can make it difficult to address transnational issues such as hate speech, misinformation, and illegal content effectively. The resulting lack of uniformity has significant implications for content moderation efforts in the region: platforms operating across multiple countries must navigate a complex patchwork of regulations, which can complicate enforcement. Regional initiatives, such as the Inter-American Commission on Human Rights' Rapporteurship on Freedom of Expression, provide a forum for dialogue and cooperation among Latin American countries on free speech and content moderation issues.

The Organization of American States (OAS)

Despite lacking a common legal framework, the region has an emerging dialogue about the need for more harmonized digital policies. Initiatives such as the eLAC Digital Agenda, a periodically updated action plan endorsed by multiple Latin American and Caribbean countries, aim to foster digital development and include discussions on harmonizing digital policies. While these initiatives are still nascent regarding content moderation, they represent a potential starting point for regional cooperation.

As the most relevant regional organization in the Western Hemisphere, the Organization of American States (OAS) takes a comprehensive and protective stance on content moderation, which is critical in addressing the diverse and evolving digital landscape across Latin America. The OAS supports the safeguarding of freedom of expression as a fundamental component of democratic societies, as reflected in various legal instruments and declarations.

First, Article 13 of the American Convention on Human Rights establishes a robust framework for freedom of thought and expression. It affirms the right of individuals to seek, receive, and impart information without undue interference, explicitly prohibiting prior censorship. This right, however, is balanced with the capacity for subsequent liability to protect the rights or reputations of others or to uphold public order, health, or morals.

The Convention restricts indirect forms of censorship, such as excessive control over media tools, and allows for minimal conditions under which prior censorship might be justified, such as protecting minors in public entertainment. Additionally, it criminalizes incitements to war and hate speech based on national, racial, or religious grounds, ensuring that content moderation practices conform to legal standards and human rights protections. This principle is reinforced by Article 4 of the Inter-American Democratic Charter, which recognizes freedom of expression and the press as essential components of democracy.

The OAS has also issued a Declaration of Principles on Freedom of Expression which, although not a binding legal document, further elaborates on content moderation, advocating for minimal restrictions on freedom of expression across all forms, whether social, political, artistic, or otherwise. Key principles include:

  1. Universal Accessibility: Ensuring all individuals can access information without discrimination, promoting inclusivity and diversity in the public discourse (Principle 2).
  2. Protection Against Discrimination: Highlighting the necessity of equitable access to information as fundamental to fostering vibrant and inclusive democratic societies.
  3. Right to Information: Affirming that access to personal and public information is crucial for transparency, accountability, and informed citizen participation (Principles 3 and 4).
  4. Restrictions on Censorship: Strongly opposing prior censorship and undue pressures from state or economic powers that could restrict journalistic freedoms or the broader dissemination of information (Principles 5 and 9).
  5. Protection of Journalistic Sources: Safeguarding the confidentiality of journalistic sources to maintain the integrity of investigative journalism and trust in media processes (Principle 8).
  6. Opposition to Desacato Laws: Critiquing laws that penalize criticism of public officials, arguing that such laws are antithetical to democratic values and freedom of expression (Principle 11).
  7. Prohibition of Media Monopolies: Advocating against monopolies or oligopolies in media ownership to ensure a diverse and pluralistic media landscape, essential for healthy democratic debate (Principle 12).

Through these frameworks and principles, the OAS promotes a comprehensive approach to content moderation that values transparency, equality, and minimal restrictions on expression. It also ensures that limitations are justifiable and narrowly defined to protect other societal interests.

Additionally, as one of the leading institutions of the OAS, the Inter-American Court of Human Rights (IACtHR) significantly influences content moderation policies in Latin America through its decisions, notably in cases like Moya Chacón v. Costa Rica and Álvarez Ramos v. Venezuela. In Moya Chacón v. Costa Rica, the Court criticized the use of civil sanctions against journalists for publishing incorrect information, holding that such penalties are disproportionate and inhibit journalism's role in democracy. In Álvarez Ramos v. Venezuela, the Court condemned the use of criminal defamation laws against journalists for critiquing public administration, noting that these laws suppress essential public debate and act as a form of indirect censorship. In both cases, the Court emphasized that any restriction on freedom of expression must meet strict standards of legality, legitimate aim, and necessity so that it does not function as indirect censorship, underscoring the Court's commitment to protecting freedom of expression within the region.

These decisions from the IACtHR may help shape content moderation policies to better align with international human rights standards. They also provide a framework for governments and regulatory bodies to craft policies that protect free expression while managing the potential harms of content dissemination.

General Overview of LATAM Countries’ Regulation

Content moderation regulation in Latin American countries is limited. Across the region, countries often lack a comprehensive framework for addressing the challenges posed by digital content, including misinformation, hate speech, and harmful content. One primary shortcoming is the lack of substantive dialogue among key stakeholders regarding content moderation. Compared to regions like the US or the EU, where robust debates and consultations between governments, technology companies, civil society groups, and academics shape regulatory approaches, Latin America often lacks such engagement.

As a result, regulatory responses to digital content remain fragmented and reactive rather than proactive. The absence of clear and consistent policy frameworks widens this regulatory gap.

The proliferation of digital content platforms and the rapid evolution of online discourse compound the region's regulatory challenges. However, some countries are beginning to address the issue and to lead the debate on content moderation and digital regulation. Brazil is the regional leader in this discussion, followed (although through far more incipient discussions) by Colombia and Mexico.

Brazil

In Brazil, a country recently marked by political polarization, content moderation policies are often subject to political pressure and ideological perspectives. The government's stance on freedom of expression, hate speech, and disinformation can impact regulatory decisions and enforcement actions. For example, in recent years, Brazilian authorities have faced challenges balancing the need to combat misinformation and hate speech with the protection of free speech rights.

In Brazil, the framework for content moderation is deeply rooted in both the Marco Civil da Internet and the evolving jurisprudence of the Brazilian Supreme Court. The Marco Civil establishes a legal foundation that shapes content moderation approaches, focusing on maintaining the balance between freedom of expression and the responsible use of digital platforms. Article 19 outlines the responsibilities of internet application providers, such as social media platforms, generally shielding them from liability for user-generated content unless they fail to comply with a court order to remove it. The law also enshrines the principle of network neutrality in Article 9, which prohibits providers from blocking, monitoring, filtering, or analyzing content and traffic on the Internet under normal circumstances.

The Brazilian Supreme Court's jurisprudence also plays a crucial role in shaping content moderation practices. Through various cases, the courts have navigated the complexities of digital communication, often placing these issues within the broader framework of constitutional rights and societal needs. For instance, in Jean Wyllys v. Carlos Bolsonaro and Eduardo Bolsonaro (2023), the courts mandated the removal of defamatory social media content, underscoring that freedom of expression does not absolve individuals from the responsibility of avoiding content that can harm another's reputation and safety. The challenge of managing fake news and misinformation has also been addressed in the Brazil Fake News Inquiry (2020), where the Supreme Court took decisive action against the organized spread of misinformation that threatened public institutions. This case reflects the judiciary's role in curbing harmful content that undermines democratic values and public order.

The landscape of content moderation in Brazil is poised for significant change with the introduction of two key legislative proposals: Proposal 592 (2023) and Proposal 2630 (2020), also called the “Brazilian Fake News Law.” Each proposal contains reforms designed to reshape how social media platforms manage user content and profiles, enhancing transparency, accountability, and the protection of digital identities.

Proposal 592 (2023) introduces strict criteria for content removal or account suspension, allowing actions only for "just cause," such as non-payment, impersonation, or intellectual property infringement. It aims to prevent arbitrary decisions by platforms and protect users' freedom of expression. The proposal mandates increased transparency, requiring platforms to notify users with clear reasons and appeal processes before taking moderation actions. It also emphasizes the protection of "digital personality" by banning anonymous accounts and aligning digital identities with physical ones for legal recognition. Furthermore, it limits platform moderation powers, demanding clear, non-discriminatory policies and legally justified actions.

On the other hand, the Brazilian Fake News Law seeks to introduce comprehensive regulations to enhance transparency and accountability on the Internet. Key provisions include a requirement for messaging services like WhatsApp to store records of mass-forwarded messages for three months. This measure aims to trace the origins of mass communications and combat misinformation, addressing concerns over the spread of fake news and its impact on public discourse. The law also bans the use of external tools for mass messaging that are not part of standardized technological protocols, targeting the misuse of such tools for spam and misinformation dissemination.

Further, it mandates that internet application providers ensure users' rights to information and expression when applying their terms of use. This includes establishing mechanisms for users to contest moderation decisions and creating a more robust framework for user interaction with platforms. The requirement for providers to publish quarterly transparency reports detailing their content moderation actions and the criteria used stands out as a significant measure for enhancing the transparency of these processes. Such reports are intended to hold platforms accountable to the public, increasing scrutiny over their moderation practices. Additionally, the law stipulates that public officials' accounts on social media must be open and accessible, thereby promoting transparency in public communications and potentially leading to a more informed electorate.

If adopted, these proposals would create a more regulated environment for content moderation in Brazil, significantly altering the landscape by strengthening user protections against arbitrary content removal and account suspensions. They would align Brazil's digital policy more closely with global trends towards greater transparency and accountability in digital communications, ensuring that users' rights are adequately protected while maintaining the balance between freedom of expression and the prevention of harm.

Colombia

Colombia's socio-political context, characterized by a history of armed conflict, social inequality, and political tension, shapes content moderation strategies. The government's efforts to address issues such as hate speech, violence, and disinformation are influenced by ongoing challenges related to peacebuilding, reconciliation, and democratic governance. Content moderation policies navigate complex socio-political dynamics, including tensions between different ethnic and social groups and between government authorities and civil society organizations. Colombia's diverse media landscape and online activism also contribute to a complex regulatory environment where content moderation decisions can have significant social and political repercussions.

Colombia’s Law 1450 of 2011 protects freedom of expression online by safeguarding net neutrality. Internet service providers (ISPs) are barred from censoring or restricting access to specific content, ensuring users have unfettered access to diverse viewpoints and information. The goal is to promote a vibrant marketplace of ideas and bolster democratic discourse by facilitating the exchange of diverse perspectives.

However, a recent case, Sentence SC-52382019 of the Civil Chamber of the Colombian Supreme Court, sheds light on another aspect of content moderation in Colombia. The case centered on the civil liability of blog operators for defamatory content posted by users. The Court ruled that blog operators can be held liable if they lack procedures to moderate comments, highlighting their responsibility to protect the reputation of others. This decision emphasizes the need to balance freedom of expression with the right to safeguard reputations online, placing the onus of content moderation on some online platforms.

Another essential contributor to Colombia’s digital regulation is UNESCO's Social Media 4 Peace (SM4P) initiative, which expanded to Colombia in 2022 and takes a distinct approach to content moderation. Unlike traditional methods that focus on removing or censoring content, SM4P empowers young people to become responsible actors in the digital space. This approach aims to create a more peaceful online environment by fostering critical thinking, media literacy, and positive content creation.

A core aspect of the program is media literacy training that equips young people to critically evaluate information they encounter online. They learn to identify misinformation, hate speech, and other harmful content that can incite violence or disrupt peace. According to UNESCO, recognizing these red flags makes young people less susceptible to manipulation and helps them avoid spreading negativity online.

Moreover, the program trains young people to effectively report harmful content to platforms, including identifying the types of content that violate platform policies, understanding reporting mechanisms, and learning how to report responsibly to avoid censorship of legitimate expression. By equipping young people with these skills, SM4P empowers them to flag problematic content that disrupts peace online while maintaining freedom of expression.

The impact of SM4P goes beyond individual action. The initiative can spark conversations about responsible online behavior and the importance of content moderation for a healthy online environment. However, it's important to acknowledge some limitations. SM4P primarily focuses on empowering youth and doesn't directly address the technical aspects of content moderation employed by platforms, like automated filtering or human review processes. Additionally, the long-term impact depends on the program's reach and ongoing engagement with young people in Colombia.

Mexico

In Mexico, a country facing persistent challenges related to corruption, violence, and impunity, content moderation practices and decisions are influenced by concerns about public safety, political stability, and democratic governance. The government's response to online harassment, organized crime content, and political disinformation reflects broader socio-political dynamics, including efforts to combat crime and protect freedom of expression. However, Mexico's regulatory approach to content moderation is also shaped by concerns about censorship, government overreach, and threats to press freedom. As a result, digital platforms operating in Mexico must navigate a complex regulatory landscape while balancing competing demands from government authorities, civil society groups, and users.

In 2021, Senator Ricardo Monreal proposed reforming Mexico's Ley Federal de Telecomunicaciones y Radiodifusión, or Federal Telecommunications and Broadcasting Law, to address shortcomings and update regulations to better suit the evolving digital landscape. The proposed reform included several key provisions to enhance internet regulation and content moderation practices within the country.

One aspect of the proposed reform focused on strengthening mechanisms for combating online harms such as hate speech, misinformation, and cyberbullying. This involved introducing stricter requirements for ISPs and online platforms to monitor and promptly remove illegal or harmful content from their platforms. The reform also aimed to improve transparency and accountability in content moderation processes by establishing more explicit guidelines and oversight mechanisms. It sought to bolster users' privacy and data security protections in line with international best practices, including provisions for stricter data protection standards, user consent requirements, and measures to prevent unauthorized access or misuse of personal information by online platforms.

However, the proposed reform did not pass due to opposition from stakeholders within the telecommunications and broadcasting industries, who viewed the proposed changes as overly burdensome or restrictive. Consequently, Mexico's Ley Federal de Telecomunicaciones y Radiodifusión has remained unchanged regarding content moderation since 2014. In particular, Article 145 of this law outlines provisions related to ISPs' responsibilities regarding free speech. It prevents ISPs from monitoring, filtering, and removing any type of content under a “non-discrimination” principle. The law does not directly address content moderation measures for harmful content, but it offers some protections for freedom of speech.

Another critical development in the Mexican digital landscape came in November 2023, when the Chamber of Deputies approved reforms to the Ley Federal para Prevenir y Eliminar la Discriminación (Federal Law to Prevent and Eliminate Discrimination) to curb hate speech and discrimination on social media platforms. The reforms seek to regulate and sanction individuals or entities that engage in such practices in order to foster a more inclusive and respectful online environment. The measures include provisions requiring platforms to establish mechanisms for reporting and promptly removing hate speech content. Additionally, the reforms aim to promote education and awareness to prevent the dissemination of discriminatory content.

Conclusion

The landscape of content moderation in Latin America remains fragmented and varied, reflecting the diverse political, cultural, and legal contexts of the region's countries. While some nations have made strides in developing comprehensive legal frameworks for digital regulation, many others lag in addressing the challenges of online content. Despite these challenges, there is growing recognition of the need for more harmonized digital policies within the region.

The OAS and the IACtHR play pivotal roles in shaping content moderation policies across the region. The OAS advocates for freedom of expression through legal instruments like the American Convention on Human Rights and promotes transparency and minimal restrictions on expression. The IACtHR's rulings require that any restrictions on expression meet stringent legal standards to prevent indirect censorship. Despite these efforts, the region still lacks a common framework for content moderation, and regional initiatives like the eLAC Digital Agenda are still in their early stages on this issue.

A comparative analysis of Mexico, Colombia, and Brazil highlights the varying approaches to content moderation in Latin America. Brazil leads with evolving legal structures, including the Marco Civil da Internet and new legislative proposals. Mexico, while having foundational laws, is debating how to adapt its regulatory approach to address online harms, and Colombia, influenced by its socio-political challenges, emphasizes net neutrality and educational initiatives to promote responsible digital behavior.

In addition to legal frameworks, the effectiveness of content moderation in Latin America is also influenced by technological infrastructure and the resources available to enforcement agencies. Disparities in internet access and digital literacy across the region further complicate efforts to regulate online content uniformly. While urban centers may boast robust connectivity and skilled personnel, rural and marginalized communities often lack access to information and to mechanisms for reporting harmful content.

The transnational nature of online content poses unique challenges for regulation in Latin America. Social media platforms, for instance, operate across borders, making it difficult for individual countries to enforce their content standards effectively. This underscores the importance of international cooperation and multilateral agreements in addressing issues such as hate speech, misinformation, and online harassment.

Authors

Mateo García Silva
Mateo García Silva is a student fellow at Tech Policy Press and a Tech & Public Policy Fellow at the Georgetown University McCourt School of Public Policy.
Maria Fernanda Chanduvi
Maria Fernanda Chanduvi is a student fellow at Tech Policy Press and a Master's candidate at the Georgetown University McCourt School of Public Policy. Maria holds a J.D. from the Pontificia Universidad Católica del Perú and is an M.A. candidate for the Communication, Culture, and Technology Program...