Twitter Plunges on Annual Scoring of LGBTQ Safety on Social Media

Justin Hendrix / Jun 15, 2023

Justin Hendrix is Editor and CEO of Tech Policy Press.

For the third year, the LGBTQ media advocacy organization GLAAD has released a Social Media Safety Index (SMSI) that finds major tech platforms are failing to keep LGBTQ+ users safe. The report was released at a time when the broader social and political context continues to grow more dangerous: in the US, more than 75 anti-LGBTQ bills have been signed into law in state legislatures this year, and attacks both on- and offline are on the rise.

Using twelve indicators and a methodology designed in partnership with the nonprofit Ranking Digital Rights and Goodwin Simon Strategic Research Group, the SMSI assigns each social media company a score based on its performance against those indicators.

While none of the platforms achieve a passing score, Twitter is worst in class, with its score plunging 12 points compared to the prior year, to 33%. The company’s owner, Elon Musk, has used the platform to promote “bigoted and anti-LGBTQ rhetoric,” according to the watchdog group Media Matters for America, and the GLAAD report chronicles changes to Twitter’s policies under Musk that further endanger LGBTQ safety.

● Instagram: 63% (+15 points from 2022)
● Facebook: 61% (+15 points from 2022)
● TikTok: 57% (+14 points from 2022)
● YouTube: 54% (+9 points from 2022)
● Twitter: 33% (-12 points from 2022)

Source: GLAAD Social Media Safety Index (SMSI) 2023

The SMSI report draws a connection between online hate and harassment and real-world harms, including violence. GLAAD says the platforms profit from anti-LGBTQ hate: “The decision to allow anti-LGBTQ hate on their platforms not only benefits the grifters and bigots who post it, it also benefits the companies themselves.” The report cites studies that connect such content to platform profits, particularly from anti-trans campaigns.

Layoffs at the platforms have hit trust and safety teams especially hard. One Twitter employee working on content moderation told NBC News that “[f]ewer people means less work is being done in a lot of different spaces.” I asked Jenni Olson, Senior Director of the GLAAD Social Media Safety Program and one of the authors of the report, whether GLAAD and its allies are finding it more difficult to get the attention of tech firms when they identify harmful or dangerous posts that violate their policies. Olson said the companies still receive GLAAD’s reports of potentially violative content, and that “rather than staffing issues, the greater problem seems to be their interpretation and enforcement of their own policies.”

“For instance, Meta's hate speech policies specifically prohibit anti-LGBTQ ‘groomer’ content—the baseless, false, and dangerous conspiracy theory asserting that LGBTQ people are threats to children,” said Olson. “Their policy is very clear, and the company has made public statements underscoring this. And yet Meta consistently does not enforce the policy, including in their allowance of multiple ‘Gays Against Groomers’ accounts which continue to exist across Facebook and Instagram despite containing the violative language in their account names and being almost exclusively devoted to perpetuating this bigoted and dangerous lie.” Olson pointed to a recent ADL Center on Extremism report that identified the group as “an anti-LGBTQ extremist coalition.”

Notably, however, Meta properties Facebook and Instagram both improved on their 2022 scores, following Meta’s adoption of a prohibition on targeted misgendering, improvements to its targeted advertising policies, and moves to train moderators. TikTok also improved on key measures.

And yet, given the platforms’ generally poor performance, I asked Olson whether there is a sense of diminishing returns, particularly in working with firms such as Twitter, whose ownership appears to have taken a decisive turn toward derogatory, if not hateful, speech against the LGBTQ+ community.

“While this kind of platform accountability advocacy work can be extremely demoralizing and even at times feel literally hopeless — we know that it could be so much worse if we were not here standing up to these companies,” said Olson. “We monitor and report and communicate with them all on a weekly if not daily basis. They know we are paying attention. They know we are calling them out. And they know full well that their products are unsafe for LGBTQ people, not to mention everyone. As platforms like Twitter get even worse, it’s even more important that we hold the line and continue to be a watchdog, along with so many other organizations and individuals doing this work.”

Citing GLAAD’s 2023 Accelerating Acceptance study, Olson noted that a supermajority of non-LGBTQ Americans agree that LGBTQ people should not be discriminated against.

“All of this anti-LGBTQ hate and disinformation — is not reflective of who we are as a country,” said Olson. “All of this anti-LGBTQ hate — we simply all have to continue to stand up against it and to show up with our actual values around pluralism and respect for people’s basic rights. So that’s what we’ll continue to do.”
