A Climate of Disinformation: Haters gonna hate, but social media companies force us to listen

Michael Khoo / Sep 27, 2021

As dozens of Texans were dying of hypothermia in a stunning storm this past winter, a Twitter user named @Oilfield_Rando posted a picture of wind turbines being de-iced—blaming the Texas power grid’s failure on green energy.

The image and narrative were quickly amplified by prominent social media accounts and from there got coverage in conservative outlets like Breitbart, The Daily Wire, and Fox News. Pundit Tucker Carlson’s segment alone gave it more than 4.4 million views on Facebook and 600,000 on YouTube at last check. Conservative politicians used the talking point and it became a dominant narrative.

An image went viral, and politicians ate it up.

Of course, it’s utterly false. The image was from Sweden. From 2015. But many Americans who rely on social media for their news would never know this. While mainstream outlets went on to debunk the narrative of green energy failure in Texas, our research showed that 99 percent of the false posts on Facebook and other platforms went unchecked—despite the companies’ supposed commitment to fact-checking on climate information.

It’s clear that playing whack-a-mole isn’t working.

The truth is that the business model of social media companies incentivizes everything from hate speech to climate disinformation, and it needs to be pulled apart if we are ever going to address these harms, which in the case of behemoths such as Facebook and Google stem in part from their monopolistic competitive position. America has antitrust laws for a reason: no company should have this much control over an industry, the flow of information, and ultimately the public discourse.

While addressing competition issues may take years, developing transparency policies to reduce the incentives for disinformation seems like low-hanging fruit in the near term.

Last year, for example, prominent climate deniers on social media like Naomi Seibt went down the QAnon rabbit hole. After her initial high engagement rates in March faded, in May Seibt began including conspiracy content about the COVID-19 pandemic, Jeffrey Epstein, and even “Pizzagate” in her feed. She was rewarded by the platforms with a wider audience.

As a start, Americans should demand that Facebook and other platforms take a few simple steps to show they are serious about combating disinformation and rewarding quality information instead:

First, disclose the data: Social media companies have tried to turn their platforms into black boxes; just this week, The New York Times reported that Facebook is cutting back on how much data it shares with academics and journalists studying how the platform works.

Already, it’s impossible to know exactly what steps social platforms are taking to combat climate disinformation, how many false posts they’re filtering out, and which super-users they’ve flagged for special privileges. Friends of the Earth’s analysis of the winter storm in Texas was made much more difficult by the fact that we couldn’t even ascertain what controls and fact-checking the companies intended to have in place, much less whether they were actually following through. The “denominator” of total disinformation, against which any enforcement could be measured, has been deliberately hidden, and lawmakers could stop this.

Second, stop the superspreaders: Instead of expecting individual users to get in screaming matches with superspreaders, we need clear community standards and enforcement policies. A small group of people pushes the vast majority of disinformation. The Center for Countering Digital Hate found that a mere 12 accounts created 65% of vaccine disinformation. While Facebook attacked the group’s methodology, details of its own internal research leaked to the Wall Street Journal later confirmed that “a small number of ‘big whales’ were behind many antivaccine posts and groups on the platform.” The pattern holds for climate change, where many of the worst offenders with high engagement have direct ties to the oil and gas industry. Social media platforms need policies that define this “coordinated inauthentic behavior,” as Facebook itself terms it, and stop the reign of known liars.

Moves like Facebook’s commitment to spend $1 million on grants combating climate misinformation pale next to its $86 billion in annual revenue.

Facebook and other platforms should be compelled to institute a “two-strike” policy, after which repeat-offender superspreaders lose the virality functions of the platform unless they can prove that a post is factual. Virality is the product of human-designed algorithms and driven by Facebook’s profit motive. It’s a choice, not a natural consequence, and we can choose to take it away from known liars and obvious bullies. People who repeatedly spread lies, like Tucker Carlson, shouldn’t be given an additional megaphone. In fact, repeat offenders like Carlson could be required to submit three verified sources before they are allowed to have climate change rants amplified ever again. Journalists do this basic fact-checking every day; the practice can be transferred.

We’ve recently seen hard proof of the impact of this strategy: the de-platforming of QAnon and Trump in January caused a 73% drop in election-related disinformation.

Third, reward expertise, not shouting: In tandem, platforms can choose to amplify voices that are both truthful and nuanced. Again, platforms have made choices about the speech their algorithms currently reward. They could instead design functions that reward posts that are substantiated and well-reasoned, rather than simply incendiary. Facebook, Twitter, and other platforms could amplify voices on climate science based on long-accepted standards in the media and broader society, such as training and other qualifications, professional affiliations, and publication history. They also could introduce footnote and sourcing functions and reward users who use them, as Wikipedia has done for years. And they could create conversation spaces where the known bomb-throwers are not allowed in. Imagine how we handle this in most of our offline spaces, and mirror that.

Platforms have the technological and financial capabilities to make these changes today. When they want people to see a message, they have the tools. In a hat-tip to George Orwell, Facebook will now be deliberately promoting “good news” stories about itself in users’ news feeds. We’d all be much better off if they used their ability to influence public discourse to promote quality news and information—rather than just trying to dupe us into thinking they did.

At the end of the day, haters gonna hate. But rewarding them with our attention is a choice the platforms make. The health of the planet and every living thing is at stake; these reforms are urgently necessary.

Authors

Michael Khoo
Michael Khoo is climate disinformation program director at Friends of the Earth and co-founder of UpShift Strategies. He has led research projects on how climate disinformation networks merged with Q-Anon, documented how Facebook and Twitter spread climate disinformation during the Texas blackout, h...