
Regulating Online Platforms Beyond the Marco Civil in Brazil: The Controversial "Fake News Bill"

Joan Barata / May 23, 2023

Joan Barata is a Senior Fellow at Justitia's Future Free Speech project, and is also a Fellow of the Program on Platform Regulation at the Stanford Cyber Policy Center.

Image: Campo Grande, MS, Brazil - November 6, 2022. Brazilian protesters on the streets asking for federal intervention after Lula's election. Vinicius Bacarin/Shutterstock

Introduction

Brazil’s President, Luiz Inácio Lula da Silva, has referred a proposed law to Congress. Colloquially referred to as the “Fake News Bill,” the draft legislation originates in a proposal made by Senator Vieira in 2020 and is aimed at regulating online platforms and instant messaging services in the country. The proposed legislation has been under discussion for several years, but enjoys new political urgency under the new presidency.

Unfortunately, the Bill threatens to undo many of the rights-protective innovations of Brazil’s most important internet law, the 2014 Marco Civil da Internet. In its place, the new law would severely limit the scope of the principle of intermediary liability exemption, enable the application of very strict yet vaguely defined crisis protocols, impose risk assessment and mitigation obligations without sufficient safeguards against arbitrariness and excessive impact on human rights, and broadly criminalize the dissemination of “untrue facts” in violation of existing human rights standards, among other issues.

Debates around the Bill are taking place in a context of political polarization and an expected Supreme Court decision in a series of cases in which the Court has agreed to assess the constitutionality of article 19 of the Marco Civil da Internet. At stake in these cases is the most important provision in Brazilian legislation granting conditional immunity to internet intermediaries. The Bill has also been accompanied by the adoption of specific regulations that already appear to introduce certain carve-outs to the general regime incorporated into the Marco Civil, such as the Decision of the Ministry of Justice and Public Security on the “prevention of the dissemination of flagrantly illicit or harmful content by social media platforms” (Decision 351/2023, of 12 April 2023).

Several key provisions deserve particular consideration:

1. Criminalization of a broadly defined class of “untrue facts,” which violates international freedom of expression standards and places in the State’s hands the power to persecute political speech.

2. Crisis (“security”) protocols, under which platforms must obey an unidentified administrative authority regarding content moderation decisions in one specific or several different areas (due diligence), and risk losing their immunities if they do not comply.

3. Due diligence regime in the form of risk assessment and mitigation obligations for platforms, superficially similar to those in the European legislation on online platforms, but lacking some of its constraints and raising similar concerns regarding legal certainty and impact on human rights.

4. A notice and action framework in which platforms must take as accurate all allegations by users that online content is illegal.

5. A remarkably broad must-carry obligation for an as-yet-undefined class of “journalistic” content and content posted by “public interest” accounts, meaning essentially government accounts.

Article 13 of the American Convention contains broad protection of the right to freedom of expression and a few clear indications about States’ obligations in this area, including not only negative requirements not to interfere with individuals’ rights, but also possible avenues for positive action by authorities to effectively protect the exercise of that right. Among these protections, it is important to note here the responsibility of States to prevent restrictions imposed “by indirect methods or means,” including limitations enforced or applied by private intermediaries based on obligations introduced via statutory regulation.

The Organization of American States (OAS) Special Rapporteur on Freedom of Expression published a Report on “Freedom of Expression and the Internet” in 2013, establishing a series of very relevant and specific standards in this area. As for the responsibility of intermediaries, the Report affirms, above all, that it is conceptually and practically impossible “to assert that intermediaries have the legal duty to review all of the content that flows through their conduits or to reasonably assume, in all cases, that it is within their control to prevent the potential harm a third party could cause by using their services” (par. 96).

Regarding the role and capacities of intermediaries to assess the legality of a piece of content, it is important to note how the Special Rapporteur warns States about the fact that:

“...intermediaries do not have—and are not required to have—the operational/technical capacity to review content for which they are not responsible. Nor do they have—and nor are they required to have—the legal knowledge necessary to identify the cases in which specific content could effectively produce an unlawful harm that must be prevented. Even if they had the requisite number of operators and attorneys to perform such an undertaking, as private actors, intermediaries are not necessarily going to consider the value of freedom of expression when making decisions about third-party produced content for which they may be held liable” (par. 99).

For these reasons, and in the same sense, in a Report presented in 2011 the UN Special Rapporteur clearly emphasized that “[h]olding intermediaries liable for the content disseminated or created by their users severely undermines the enjoyment of the right to freedom of opinion and expression, because it leads to self-protective and over-broad private censorship, often without transparency and the due process of the law.” In other words, international human rights standards enshrine a general principle of intermediary liability exemption to avoid the imposition of private legal adjudication obligations and the creation of severe risks for freedom of expression.

The OAS Special Rapporteur has also had the opportunity to refer to the so-called “fault-based liability regimes, in which liability is based on compliance with extra-judicial mechanisms such as notice and takedown.” The Rapporteur particularly warns about the fact that “in general (…) this type of mechanism puts private intermediaries in the position of having to make decisions about the lawfulness or unlawfulness of the content, and for the reasons explained above, create incentives for private censorship” (par. 105). More particularly:

“the requirement that intermediaries remove content, as a condition of exemption from liability for an unlawful expression, could be imposed only when ordered by a court or similar authority that operates with sufficient safeguards for independence, autonomy, and impartiality, and that has the capacity to evaluate the rights at stake and offer the necessary assurances to the user” (par. 106)

The Rapporteur particularly insists on the fact that “leaving the removal decisions to the discretion of private actors who lack the ability to weigh rights and to interpret the law in accordance with freedom of speech and other human rights standards can seriously endanger the right to freedom of expression guaranteed by the Convention” (par. 105). A different and fairer model would of course be one where legal adjudication remains in the hands of the judiciary and independent authorities, and intermediaries become responsible and liable only when required to act by the former (which is, incidentally, the model of conditional liability adopted so far by the Marco Civil).

Last but not least, it is important to note that comparative models have had some influence on the drafters of the proposal. The language of certain sections of the recently adopted Digital Services Act (DSA) is recognizable across the text, although this does not necessarily mean that the proposal in question is based on the same principles and safeguards that inspired the legislation in the European Union.

Relevant aspects of the proposal in light of human rights standards and best comparative practices

1. Intermediary liability

The Bill includes a series of provisions that effectively eliminate the very notion of intermediary liability exemption, or even any form of conditional immunity, by introducing a relatively wide area of strict liability. In other words, platforms could be held liable for user speech they never knew about or had an opportunity to respond to. This would not only contradict the regional standards mentioned above, but would also be far less protective of online free expression than the European model that appears to have inspired its provisions.

The DSA does not repeal the basic provisions established under longstanding EU law, particularly the principle that intermediaries are immunized from liability for content they do not know about. At the same time, the DSA does incorporate new important rights for users and obligations for service providers (particularly the so-called very large online platforms: VLOPs) in areas such as terms and conditions, transparency requirements, statements of reasons in cases of content removals, complaint-handling systems, and out-of-court dispute settlements, among others.

Therefore, it is important to note that the DSA model maintains the principle of conditional liability exemption as a basic precondition to avoid the violation of users’ right to freedom of expression via delegated private regulatory powers, while, at the same time, holding online platforms responsible for fulfilling important obligations in the areas mentioned. This responsibility is articulated through a series of regulatory supervision mechanisms and possible subsequent administrative penalties.

The Bill is based on a different approach, with strict liability provisions (i.e., platforms become liable ex lege, without any other prior step or assessment) in two cases (article 6):

a) Damages caused by third-party advertising content. It is important to note that the text does not seem to refer only to responsibility for possible violations of general advertising rules by a third party, but also, and particularly, to damages caused more generally by the content in question. This provision (which refers only to advertising, not organic content) may trigger serious issues of interpretation and enforcement. It also establishes a liability regime that appears to eliminate the immediate responsibility of providers of products and services vis-à-vis consumers. A more balanced regime would better distribute obligations and liability among all the parties involved and thus avoid disincentivizing the presence of advertising on online platforms. It should also be noted that, under articles 26-30, online platforms already hold a significant number of obligations, and therefore responsibilities, regarding digital advertising.

b) Damages arising from third-party content deriving from non-compliance with the duty of care obligations established for the duration of the so-called “security protocol”. This strict liability provision is therefore connected to the cases where an online platform is subjected, via unspecified regulatory instruments, to an undefined protocol under which an unidentified administrative authority will be able to dictate all content moderation decisions in one specific or several different areas. Such a protocol will be justified in cases where broadly defined risks are “imminent” or the actions of the platform are insufficient or negligent. Nothing is specified by the legislator when it comes to assessing such imminence, insufficiency, or negligence. Nothing is specified either regarding the scope of such intervention or any possible appeal or other safeguards to prevent an excessive intrusion into users’ rights, particularly freedom of expression. This means that in such poorly defined cases, platforms are forced to obey an administrative authority in any determination regarding content moderation (including issues beyond legality) or otherwise face strict and inexcusable liability for any damage caused by third-party content.

It is important to acknowledge that the DSA has incorporated, in its article 48, the possibility of implementing exceptional protocols in crisis situations of an extraordinary nature affecting public security or public health. These situations may also entail the imposition of certain restrictions and limitations on intermediaries’ capacity to moderate content. However, the DSA’s crisis protocols, particularly in terms of criteria and characteristics, are defined with more detail and legal certainty, and particular safeguards to avoid excessive or unjustified interventions are contemplated. In addition, a co-regulatory approach is incorporated, involving all relevant stakeholders in the drawing up, testing, and application of the measures in question. And above all, a lack of proper enforcement by platforms might trigger administrative responsibilities but does not affect intermediary liability exemptions.

This provision is established in the Bill as a purported exceptional regime vis-à-vis a series of systemic risk assessment and risk mitigation obligations, established under Section II. These provisions are quite similar to the regime included in the DSA. However, further analysis is necessary here:

a) The systemic risk assessment and mitigation regime included in the DSA has not been implemented or tested so far. There is thus a complete lack of specific monitoring criteria from a regulatory point of view, as well as of internal or co-regulatory standards from the point of view of online platforms.

b) The consideration of these matters by the DSA suggests that correct regulatory oversight of risk assessment and mitigation can only be effectively undertaken in cases where platforms have already analyzed and deployed documented efforts in this area. It is important to note that risks might manifest in different ways, on different platforms, and even vis-à-vis different types of users. In other words, while the European-style ex-post assessment of mitigation policies already raises many unresolved methodological and substantive questions from a regulatory perspective, the ex-ante definition of a platform’s policies by an external body in order to deal with vaguely defined risks clearly opens the door to unaccountable arbitrariness. Moreover, the latter can seriously harm the freedom of expression of users by pushing the State’s indirect speech control powers far beyond the legal/illegal dichotomy.

c) The DSA system of risk assessment and mitigation has been the object of criticism from a free speech perspective:

c(1) Provisions that connect risks with the distribution of illegal content put the burden on platforms to make very complex determinations regarding the actual interpretation of the law and the scope of fundamental rights such as freedom of expression. This is particularly important bearing in mind the complexity of the possible areas of illegal content mentioned in article 11. What kind of sophisticated content moderation teams and measures might minimally guarantee reasonable monitoring and prevention of the wide array of crimes referred to in this provision? Even if providers will not be judged based only on the treatment of “isolated cases,” how could an overall strategy against illegal content be fairly assessed?

c(2) When it comes to legal-but-harmful risk categories, it needs to be noted that political, economic, and social life in modern societies incorporates per se many dysfunctions and risks. These problems, beyond illegal behaviors, exist in parallel with or independently of online platforms. The key element here is to properly assess to what extent intermediaries generate “extra risks” or increase existing ones to an “unacceptable” level. The next big question is whether platforms can be put in the position of making such a complex analysis and deciding the best tools to deal with the mentioned negative effects. The absence of clear procedures and criteria pre-determined by the law (or to be determined by an independent body) generates direct risks for freedom of expression, particularly within the context of a regime that focuses only on harms and does not incorporate the need for any human rights impact assessment to guarantee a balanced approach.

2. Notice and action mechanism

Article 16 of the Bill regulates users’ notice-and-action procedures and their legal consequences. In order for a notice to be “valid,” it must fulfill a series of requirements that the legislator does not specify, choosing instead to defer them to a subsequent regulatory instrument.

However, the text does contain a very straightforward and consequential determination: valid notices necessarily create knowledge in providers regarding the infringing nature of the reported piece of content. No room for diverging interpretations or legal considerations. No possible appeals or revisions. This gives third parties (though not everyone, only other users of the same service) a disproportionate and unaccountable power to restrict speech, subject only to the fulfillment of a series of still-to-be-determined formalities.

3. Content moderation vs content curation

The proposal embraces a wide notion of content moderation (article 5), which seems to incorporate not only content policies aiming at preserving certain values and principles and preventing abuse (i.e., terms of service and enforcement), but also other decisions usually taken by online platforms regarding the way content is presented and offered to users, based also on their own preferences (ranking and recommendation). Articles 17, 18, and 19 appear to subject all these decisions to a series of broad requirements in terms of implementation, communication, review, and publication.

This approach represents a disproportionate and technically unfeasible burden on platforms, one that does not necessarily serve the purposes and objectives of the legal proposal. A more balanced approach would be necessary to differentiate various types of obligations and possible actions by users based on the specific nature of the providers’ decisions, their impact on rights, freedoms, and conditions of enjoyment of the service, as well as users’ involvement when it comes to defining their own online experience.

4. Removal of journalistic content

Chapter VII of the Bill contains specific provisions regarding the treatment of so-called journalistic content by online platforms. Article 33.6 seems to incorporate a very broadly defined must-carry rule for this type of (essentially undefined) content, except for the cases covered by the law or pursuant to a court order. This provision reads as some sort of self-referential loophole that might trigger, in its current form, problems of interpretation and enforcement. The incorporation of such an obligation may also force platforms to keep pieces of content that seriously violate their terms of service (for example, certain types of disinformation or the use of derogatory terms) inasmuch as they have been published by “traditional” media entities and do not contradict national legislation. This also means granting privileged treatment to a piece of content based only on the user who posted it, independently of the public interest of the publication. Such a discriminatory and unjustified approach may not be compatible with the way freedom of expression and freedom of information are protected as universal rights, in a context where the traditional notions of media and journalism have become clearly outdated.

5. Public interest social media accounts

Article 33 incorporates another must-carry obligation for providers vis-à-vis accounts considered of “public interest” (i.e., accounts connected to individuals holding public positions and acting in their institutional capacity). Platforms are barred from carrying out “illicit or abusive” interventions regarding these accounts, although a prohibition framed so broadly should in fact apply to the relationship between an online platform and all of its users. In addition, in cases of cancellation of such accounts, the proposal establishes that the judiciary shall order their restoration within 24 hours whenever there is proof that the account was managed based on the principles of “legality, impersonality, morality, publicity, and efficiency” (in the language of an informal English translation).

These very general moral principles do not provide much clarity regarding the very sensitive dilemmas that moderation of public figures’ accounts usually triggers, and they place on the shoulders of the judiciary (a branch that decides strictly on the basis of positive law) the responsibility of adopting determinations that go beyond mere legal enforcement and interpretation.

6. Criminalization of fake news

Last but not least, the proposal contains an extremely problematic provision, from a human rights perspective, in article 50, which criminalizes the dissemination of “untrue facts” (sic) when they are “capable of compromising the integrity of the electoral process or that may cause damage to physical integrity and are subject to criminal sanction.” International human rights standards, particularly in the area of freedom of expression, clearly establish that general and broad prohibitions and penalties on the dissemination of “false,” “untrue,” or “fake” information constitute an unacceptable restriction on the fundamental right to freedom of expression.

The dissemination of disinformation, propaganda, and similar types of harmful speech must be addressed via non-repressive mechanisms such as media ethics, communication policies, reinforcement of public service media, or promotion of media pluralism. The provision in question not only violates these general international standards but also contains very broad language that places in the State’s hands the power to persecute political speech within the context of the most consequential and sensitive process in a democracy, namely an election.

Conclusion

The Bill aims at improving the pluralism and fairness of the online public sphere by defining platform tools and obligations regarding content moderation. Most of the text adopts an approach similar to the European Union’s DSA model, based on avoiding direct content regulation and reinforcing the rights of platforms’ users and the public in general. However, in most cases the proposal fails to incorporate the adequate guardrails and safeguards that are absolutely necessary to avoid an excessive impact on the right to freedom of expression and an unnecessary and undesirable delegation of speech regulation powers to private hands.

Civil society, political organizations, international bodies, media, and the general public appear to be following the debates around this proposal with interest and intensity. This is the right moment to put forward and discuss the best technical proposals beyond superficial and politicized debates. What is at stake is the enactment of a new piece of legislation adapted to the specific characteristics of a region that does need a proper and proportionate regulation of digital services, but not the Bill as it is currently written.
