Perspective

Why Platforms Don’t Catch Climate Misinformation — and How to Change That

Alice Hunsberger, Pinal Shah, Theodora Skeadas / Dec 4, 2025

An installation of an iceberg with a burning Facebook logo near the United States Capitol, in protest of climate change misinformation on the social media platform, Nov. 4, 2021. (Eric Kayne/AP Images for SumofUS)

Climate misinformation presents a troubling paradox: while most Americans believe that climate change is real and human-caused, online misinformation continues to erode public trust in climate science. Many platforms lack specific policies addressing climate disinformation, and even those with policies in place struggle to enforce them effectively. Why do platforms fail to adequately address this problem, and what would it take to change?

In mid-October, the Harvard Radcliffe Institute convened ten experts — trust and safety professionals, science communicators and AI researchers — to examine online civic trust related to climate science. The seminar, organized by professor Jodi Schneider of the University of Wisconsin-Madison and Dr. Rod Abhari of Northwestern University, explored how misinformation shapes public understanding of climate change and what mechanisms might rebuild trust in climate science online.

This piece distills perspectives from that session, focusing specifically on the trust and safety dimensions: why platforms struggle both to create and to enforce climate misinformation policies, and what structural changes could address these failures.

A blind spot for trust and safety

Major technology platforms have demonstrated that they can act decisively against harmful content when certain conditions are met. Following the January 6, 2021 insurrection at the United States Capitol, Twitter and Facebook suspended Donald Trump’s accounts within days. During the COVID-19 pandemic, platforms implemented robust medical misinformation policies guided by directives from national and global health authorities. Yet climate misinformation — despite the catastrophic real-world consequences of climate change — remains largely unaddressed.

The problem is structural. Trust and safety teams typically operate within frameworks that triage based on three criteria common to risk management across domains: imminence of harm, likelihood of occurrence, and severity of impact.

Climate misinformation clearly meets the latter two thresholds (the harms are both certain and catastrophic), but its lack of imminence renders those factors moot. Platforms respond to immediate violence or destruction, not to diffuse, long-term harms. Research likewise shows that the public responds less to protracted but highly impactful harms, like famine, than to discrete, newsworthy disasters, like hurricanes and tornadoes. There is also no authoritative institutional body equivalent to the Centers for Disease Control and Prevention or the World Health Organization to provide clear enforcement guidance. Finally, the issue is deeply politicized, creating significant backlash risks that platforms are unwilling to absorb in the current political climate.

TikTok offers an instructive case study. The platform maintains a policy stating that climate misinformation will be removed or suppressed from recommendation algorithms. In practice, however, enforcement appears minimal, not because of technical limitations but because climate content moderation lacks the crisis urgency, institutional backing and political consensus that drive resource allocation within trust and safety organizations.

While some platforms have acted against coordinated inauthentic behavior (which could address bot networks spreading climate denial or foreign influence operations exploiting climate paralysis), these measures address only a narrow subset of the problem. They do little to counter well-funded disinformation from fossil fuel interests, the amplification of organic climate skepticism or the sophisticated greenwashing campaigns that blur the line between advocacy and deception.

The asymmetry problem

The seminar participants identified a fundamental imbalance: fossil fuel industry resources for shaping public opinion vastly exceed funding available for climate communication. Organizations like Climate Nexus have witnessed their digital advocacy funding dry up, while more than 1,600 oil and gas lobbyists attended COP30 (roughly one in every 25 participants).

But the problem is structural, not just financial. Social media engagement often relies on controversy and debate, precisely what climate misinformation generates. Heated arguments keep users on platforms longer, increasing advertising revenue. Oil and gas interests can exploit these incentive structures, while climate communicators lack equivalent resources to compete for algorithmic attention. Even when funding exists for platform accountability work, the institutional incentives to undertake it are largely absent. Philanthropic funding and advocacy campaigns cannot compel platform action in the way that regulatory requirements, advertiser boycotts, legal threats or public or shareholder pressure can. Climate misinformation currently generates insufficient pressure in any category.

This asymmetry manifests in sophisticated tactics documented by researchers. Climate skeptics amplify fringe scientists who receive significantly more engagement than mainstream researchers. These individuals misinterpret legitimate studies to support false claims, reframe content moderation as censorship to position themselves as truth-tellers facing institutional suppression and exploit the speed of social media, where sensational falsehoods spread faster than careful corrections.

The timing compounds the problem. Most major tech companies have publicly scaled back their trust and safety teams, leaving fewer resources to address an expanding problem.

Multiple pressure points

Seminar participants from the trust and safety sector, who spoke under the Chatham House Rule, were candid about the structural constraints they face: speaking up inside their companies carries real professional risk, and their work is generally resourced only when it is profitable and politically convenient, neither of which currently applies to climate moderation.

Rather than pursuing a single solution, seminar participants identified several leverage points that could collectively shift platform behavior:

  • Regulatory action, particularly in the EU: Platforms operate globally and must satisfy the most stringent regulatory environment. European regulations on climate-related claims and content governance create compliance requirements that can have spillover effects in other jurisdictions. The United Kingdom's Digital Markets, Competition and Consumers Act now allows sanctions for greenwashing without litigation — a model that could extend to platform-hosted content.
  • Advertiser accountability: Brands want to know what content appears alongside their advertisements. Organized campaigns highlighting climate misinformation adjacent to brand advertising can create business pressure for platform policy changes. This approach has worked in other contexts — forcing platforms to demonetize certain content categories — and could be adapted for climate.
  • Building alternative models: Rather than focusing exclusively on major platforms, the working group identified smaller, community-oriented platforms as potential partners for developing and demonstrating effective climate misinformation policies. These platforms may have greater flexibility to experiment with governance approaches that could then be adopted more broadly. Success at a smaller scale can establish proof of concept and create competitive pressure on larger platforms.
  • Institutional authority development: The absence of a climate science body equivalent to the CDC or WHO represents both a challenge and an opportunity. While establishing such authority involves political complexity, existing scientific institutions — the National Oceanic and Atmospheric Administration, the Intergovernmental Panel on Climate Change and NASA — could potentially fill this role if their mandates were clarified and their visibility increased, though significant political hurdles make this unlikely for the next half decade.
  • Bridging sectors: The seminar revealed significant gaps between academic research, trust and safety practice and policy advocacy. Academics often lack understanding of platform operational constraints. Trust and safety professionals rarely have access to climate science expertise, creating friction for policy development even when the will exists. Policymakers receive insufficient input from either group. Creating formal coordination mechanisms — regular convenings, shared resources, collaborative research — can begin to address these disconnections.

The stakes

Climate misinformation has real consequences today: conspiracy theories about “government weather-controlling machines” led to threats against FEMA workers during hurricane response, while delayed climate action drives mounting economic losses, from the property insurance crisis to destroyed infrastructure. Political paralysis fueled by manufactured doubt slows the transition to sustainable energy systems that most Americans, according to polling data, actually support.

Current trust and safety frameworks were never designed to address slow-moving, politically contested, institutionally complex challenges like climate misinformation. The question is whether those frameworks can evolve — through regulatory pressure, market incentives, competitive dynamics, or institutional innovation — to meet the moment. The Harvard seminar represents one effort to build the cross-sector coordination and evidence base necessary to make that evolution possible.

For platforms, policymakers, and civil society organizations, the message is clear: climate misinformation persists not just because it is technically difficult to address, but because existing incentive structures don't require it. Changing those structures requires sustained pressure across multiple fronts — but the alternative of allowing continued erosion of public trust in climate science is far more costly.

Authors

Alice Hunsberger
Alice Hunsberger is Head of Trust & Safety at Musubi, an AI solutions provider for Trust & Safety teams. With over 15 years in Trust & Safety, her leadership experience spans the full spectrum from policy development and operations, vendor management and BPO leadership, AI automation and red teaming...
Pinal Shah
Pinal Shah is a policy and responsible innovation strategist at the intersection of trust & safety, AI governance, and society. She serves as Senior Advisor at CAS Strategies and Senior Director at the Data & Trusted AI Alliance, where she leads initiatives on responsible AI deployment. Previously, ...
Theodora Skeadas
Theodora Skeadas is a public policy strategist and thought leader at the forefront of technology ethics, platform governance, and responsible AI. Theodora is Chief of Staff at Humane Intelligence, a nonprofit committed to collaborative red teaming and improving the safety of generative AI systems. S...
