Oversight Board: Meta Should Review the COVID-19 Claims it Removes and Improve Transparency

Renée DiResta, John Perrino / Apr 27, 2023

John Perrino is a policy analyst and Renée DiResta is the Research Manager at the Stanford Internet Observatory.

The debate over how to identify and address false or misleading information about COVID-19 continues to be an important and contentious issue. Last year, Meta asked the Oversight Board to review whether the company should change its current COVID-19 misinformation policies for removing claims that are considered by medical authorities to be false and harmful. Meta specifically requested advice on whether to label or demote certain content, as the public health situation has changed in much of the world.

In response, last week the Oversight Board — the quasi-independent entity funded by Meta to guide its content moderation decisions and policy — released an advisory report calling for continued enforcement, but a reassessment of the types of claims Meta should remove under its current COVID-19 misinformation policy.

The Board recommends a review of the 80 medical claims currently flagged for removal under Meta’s COVID-19 policy to evaluate their continued potential for harm. The Board also recommends Meta prioritize increased user and public transparency for moderation policies and enforcement decisions; support independent research through data access and engagement with outside teams; and conduct risk assessments on design features that might amplify harmful information.

The non-binding opinion calls on Meta to prioritize addressing medical information that is likely to result in serious injury or death, while conspicuously noting that the Board attempted “to reconcile competing viewpoints from stakeholders and Board Members” by suggesting a localized approach. Meta said that such an approach was infeasible. This may be because something classified as a “harm” under a public health policy should theoretically be harmful to any human being; however, the gap between Meta and the Oversight Board on this issue highlights prior debate about whether a global approach to content moderation is simply unworkable. Indeed, the full scope of stakeholder concerns grouped by region is worth reading — the equities that different regions prioritized varied greatly, and concerns about the sociological aspects of “harm” were not uniform.

Moderation is not a binary “leave it up or take it down” process, and “harm” is a term that can mean many things. False claims promoting a medical treatment that can result in substantial injury or death, for example, should be prioritized and subject to removal, while claims that may be controversial or disputed, but unlikely to result in physical harm, can and should remain on the platform with labels that provide context or more information for users. The highly politicized nature of the pandemic in one of Meta’s largest markets — and the politicization of content moderation more broadly — necessitates more commitment to transparency efforts as well, particularly when requests for content takedowns are coming from governments.

The Oversight Board’s recommendations guide Meta in the right direction: towards protecting against demonstrable harms while respecting free expression on contentious issues. That said, this is a nuanced topic, and questions about the definition of “harm” and the best mechanism for moderation require further work by Meta, and further study by the Oversight Board. Independent research is needed to address Meta’s reported internal assessment that “there is no evidence that neutral labels are effective,” and to measure and better understand enforcement.

Meta’s COVID-19 and Vaccine Misinformation Policies

As the COVID-19 pandemic began in early 2020, platforms broadened health and vaccine information policies in an effort to minimize potential harms from false or misleading health claims or advice. Meta’s “Misinformation about health during public health emergencies” policies evolved over time, becoming more granular and proactive in an effort to address misleading narratives about vaccines, treatments, and the disease itself during the pandemic. In December 2020, Facebook enacted a new policy to remove “false claims… debunked by public health experts.” That list was expanded in February 2021 and currently covers 80 false claims — including that COVID-19 vaccines cause magnetism, contain microchips, or bestow the ‘mark of the beast,’ and that hospitals kill COVID-19 patients to get more money or sell people’s organs.

Some claims have been removed from Meta’s policy over time, such as those regarding the origins of the virus. Claims about a possible lab leak remain in dispute — debate is ongoing, though some government agencies and other authorities publicly regard it as the likely origin of the virus. However, whether the virus is of zoonotic origin has little impact on public health — it is a stretch to argue that moderating this debate meaningfully mitigated a demonstrable harm of the virus itself. On the other hand, misleading advice to treat COVID-19 with Ivermectin can result in serious health complications or death; such claims are still included in the policy. Platform policies should prioritize action on direct threats that risk serious injury or death, while remaining cautious not to overstep on discourse that does not result in immediate harm.

Additionally, “good” information can change quickly; at various times during the COVID-19 pandemic, the public was searching for information about something that was not yet knowable. In a time of evolving scientific consensus it was demonstrably difficult for platforms to determine whether content posed a significant risk to public health — not least because health institutions were often reticent to communicate as they worked to arrive at solid scientific findings. As DiResta argued in The Atlantic in May 2020, “The paradox, however, is that the WHO, the CDC, and other leading health institutions—experts in real-world virality—have failed to adapt to the way information now circulates. Agencies accustomed to writing press releases and fact sheets for consumption by professional reporters are unequipped to produce the style and speed of information that the social platforms have made routine, and that the public has come to expect.”

In acknowledging this difficult balance, the Board’s opinion recommends maintaining the COVID-19 policy, but re-examining the specific claims currently designated for removal. One challenge for Meta has been how to assess these various claims — while it employs some public health experts and scientists, it primarily relies on national and multinational public health authorities such as the CDC and the World Health Organization to determine which information falls under its policy as false and likely to cause physical harm. At times, this reliance has been a source of contention. The World Health Organization, for example, was embroiled in some controversy about whether or not it had politicized aspects of the pandemic early on in response to pressure from China. To incorporate a “broader set of perspectives” relevant to evaluating the “exigencies of the situation,” the Board recommends that the review of policies draw upon a wider field of “public health experts, immunologists, virologists, infectious disease researchers, misinformation and disinformation researchers, tech policy experts, human rights organizations, factcheckers, and freedom of expression experts.”

Focus on Public Transparency and Research

One additional challenge that the Oversight Board addresses in its opinion is that there are often gaps between the implementation and the enforcement of policies, especially when posts fall into a borderline policy area. Some content about vaccines, for example, employs a mix of personalized stories alongside decontextualized statistics; an isolated event that may be true can be exaggerated in prevalence to create a misleading picture of vaccine safety overall. Recognizing this, Meta’s current policy designates some personal stories for removal if they are “shocking or hyperbolic” in a way that may discourage the use of vaccines. However, these anecdotes are difficult for platforms to address, and their removal is likely to make users feel that they are being silenced. Inconsistency in enforcement, perceived overreliance on takedowns, and the politicization of moderation have resulted in a murky understanding of how COVID-19 moderation policy is applied.

The Board issued 18 recommendations that prioritize user and public transparency about policies, labels for posts, and reports on government requests to remove content under the policy, a measure intended to address concerns that governments could crack down on free speech under the guise of a public health emergency. The Board also emphasized the importance of a global approach and of independent review of platform data to assess whether policy enforcement is consistent. And it recommended risk assessments for design features such as recommendation algorithms and content feeds, with the findings outlined in public reports on how misleading and potentially harmful content spreads online.

Social media users should feel confident that policies are enforced consistently and transparently. There is an important balance for platforms to maintain when crafting policy to minimize misinformation while promoting free expression, including that of people sharing their experiences. Data that would make it possible to understand platform actions by content type or enforcement type are rarely made public.

Meta is correct to acknowledge that public health content moderation decisions should be regularly re-assessed as the world changes. Ultimately, removal should be a last resort in content moderation. Transparency won’t be enough for everyone, but labels and the disclosure of any limitations on content can provide context while allowing discourse during times of uncertainty.

Transparency can improve public understanding and help hold social media companies accountable for fair, consistent policy enforcement. To improve trust and safety, we hope Meta adopts these recommendations to increase public transparency for users and researchers.

Renée DiResta serves on the Tech Policy Press board.
