Will the Oversight Board Finally Get Meta to Enforce Its Own Policies on Anti-Trans Hate and Disinformation?
Jenni Olson / Sep 19, 2024
Jenni Olson is the Senior Director of the GLAAD Social Media Safety Program.
Last week, Meta's quasi-independent Oversight Board finished collecting public comments for its new “Gender Identity Debate Videos” case. The Board will now deliberate on Meta’s decision to leave up two posts (one on Facebook and one on Instagram) that target transgender people. According to the Oversight Board case descriptions, the first post features:
“...a video of a woman confronting a transgender woman for using the women’s bathroom. The post refers to the person being confronted as a man and asks why it is permitted for them to use a women’s bathroom.”
The second post features:
“...a video of a transgender girl winning a female sports competition in the United States, with some spectators vocally disapproving of the result. The post refers to the athlete as a boy, questioning whether they are female.”
Meta evaluated the posts under its Hate Speech and Bullying and Harassment policies and decided they were not in violation. In both the Oversight Board case description and the Meta Transparency Center post about the case, the moderation decisions are characterized as hinging on Meta’s policies related to “calls for exclusion.” Meta explains its decision not to remove the posts, saying: “in both instances we determined there is no explicit call for exclusion present in the posts.”
Strangely absent from Meta’s explanation of its moderation process is any reference to a much more relevant clause in its Bullying and Harassment policy, which states that: “… all private minors, private adults (who must self-report), and minor involuntary public figures are protected from … Claims about romantic involvement, sexual orientation or gender identity.”
As GLAAD Social Media Safety Program Manager Leanna Garfield and I noted in our June 2023 Tech Policy Press feature on this topic (Understanding Targeted Misgendering and Deadnaming as Hate Speech), this specific Meta policy clearly prohibits targeted misgendering, which is a “claim about a person’s gender identity.” Here, both posts intentionally misgender their targets and outright deny their gender identities: in post #1 the account asserts that the transgender woman is a man, and in post #2 it claims the trans girl athlete is a boy. (As a reminder, what is at issue here is intentionally bullying and harassing trans people by misgendering them; this is not about accidentally getting someone’s pronouns wrong. The practice is a uniquely pernicious type of hate speech in that it mocks and expresses extreme animus toward the targeted person while disingenuously pretending to be a harmless joke or “debate”; targeted misgendering is the most prevalent form of anti-trans harassment across social media.)
Even amid Meta’s convoluted web of possible loopholes, the “claims about gender identity” policy should apply to post #2. The Oversight Board explains that Meta considers the targeted girl to be an involuntary public figure, so there is no requirement for self-reporting, nor does Meta’s overarching policy excluding regular public figures from protection apply. There is no indication of the public-figure status of the targeted transgender woman in post #1, though according to the Oversight Board, Meta did say that she did not self-report the post (which Meta requires of private adults, but not of private minors or minor involuntary public figures).
If you’re still reading, you may be feeling a bit dizzy from trying to get your head around all the layers of detail here. There are yet more layers addressed in GLAAD’s public comment. While there are many interesting aspects to the case, the primary one above (protecting people from “claims about … gender identity”) should make the Oversight Board’s adjudication straightforward on at least this one front. Still, given Meta’s array of loopholes and creative interpretations of its own policies, it will be interesting to see what new rationalizations for non-enforcement may arise.
In its January 2024 trans-related case decision, the Oversight Board cites GLAAD’s 2023 Social Media Safety Index analysis of such hate and harassment on Meta platforms, noting: “the very real resulting harms to LGBTQ people online, including a chilling effect on LGBTQ freedom of expression for fear of being targeted, and the sheer traumatic psychological impact of being relentlessly exposed to slurs and hateful conduct.” The Board then notes “Meta’s repeated failure to take the correct enforcement action” and urges the company “to close enforcement gaps, including by improving internal guidance to reviewers,” concluding that “the company is not living up to the ideals it has articulated on LGBTQIA+ safety.”
As GLAAD’s public comment on the current case observes, targeted misgendering is a form of hate speech. With malicious intent, it seeks to mock, denigrate, and dehumanize transgender, nonbinary, and gender non-conforming people, in violation of Meta’s Hate Speech and Bullying and Harassment policies. It should be mitigated in accordance with all of Meta’s applicable policies and sub-policies. If accurate enforcement of those policies requires evaluation by human moderators, then the company should prioritize and assign adequate resources to ensure the timely and thorough review of such content and to fulfill the commitments it states in its public-facing policies.