Amidst Flurry Of Anti-DEI Measures, Meta’s Content Moderation Policies Will Harm People With Disabilities

Ariana Aboulafia / Feb 19, 2025

Ariana Aboulafia is a fellow at Tech Policy Press.

Mark Zuckerberg's Facebook account is displayed on a mobile phone with the Meta logo visible on a tablet screen in this photo illustration on January 7, 2025. (Photo by Jonathan Raa/NurPhoto via Getty Images)

On January 7, 2025 – one day after the certification of the results of the 2024 United States election – Meta CEO Mark Zuckerberg announced changes to content moderation policies that will impact users across platforms such as Facebook, Threads, WhatsApp, and Instagram. These changes – which were part of a series of decisions made by Meta and other tech companies to placate President Donald Trump – are harmful to marginalized users, including women, people of color, and especially LGBTQ+ people. Organizations like the Human Rights Campaign have noted that this is because the new policies specifically allow users to refer to LGBTQ+ people as mentally ill as a means of insulting them (among other forms of newly allowed harassment) and because they include the end of Meta’s fact-checking partnerships.

While Meta leaders have claimed that these changes will return the platforms to their “roots” by prioritizing free speech, it is more likely that these policies will make platforms hostile environments for certain marginalized people – the very same people currently being targeted in a wave of anti-DEIA policies (that is, those that attack programs related to diversity, equity, inclusion, and/or accessibility). In practice, these changes will prioritize the speech of some while chilling the speech of others. Those who will have their speech chilled will not only be LGBTQ+ people but also other marginalized groups – including people with disabilities.

Content moderation policies that negatively affect disabled people and perpetuate ableist stereotypes, both on Meta and other platforms, are not new. However, while Meta and other platforms have made progress in recent years towards making their platforms more inclusive, these new policies represent a step in the wrong direction – one that impacts the health, safety, and free expression of disabled people.

In 2021, for example, the New York Times reported that Meta platforms were rejecting advertisements placed by small businesses that sold clothing designed specifically for people with disabilities, mostly on the grounds that the ads violated platform policies by advertising medical devices. The article featured several businesses that had this experience, including one making adaptive fashion for wheelchair users. It noted that an ad featuring a pair of pants in a “standing fit” (that is, shown on a model who was standing up) was accepted, but an ad showing the same pants on a disabled model in a wheelchair was rejected. These content moderation practices for advertisements – which generally follow the broader content moderation and community standards policies of Meta platforms – have disproportionately chilled the speech of disabled people and have led some to feel that they are experiencing a form of “shadow banning” based on either their own disabilities or their decision to feature disabled people in their content.

The new content moderation rule changes for Meta platforms will impact disabled users in several ways. For example, the company’s decision to end fact-checking partnerships (in the US, for now) will likely allow vaccine and health-related misinformation and disinformation to flourish on Meta platforms. Disabled people can be uniquely harmed by vaccine misinformation and disinformation. If vaccine misinformation contributes to lower vaccination rates – and there is evidence that it does – immunocompromised disabled people are at disproportionate risk of disease or death. The same can be said about health misinformation – indeed, in 2023, the then-head of the Food and Drug Administration said that he blamed health misinformation and disinformation for lowering US life expectancy. Platforms that allow this sort of misinformation and disinformation to flourish harm everyone – particularly people with disabilities. In addition, the false claim that vaccines cause autism (and that parents should avoid vaccines for their children, lest they become autistic) perpetuates the idea that being neurodivergent is a bad thing and should be avoided at all costs – including at the expense of a child’s health, wellbeing, or even life.

Similarly, the new content moderation rules allow individuals to use mental illness as a pejorative (particularly in reference to LGBTQ+ people), which perpetuates the idea that having a mental health-related disability is a bad thing and that it is acceptable to use a disability as an insult. This especially harms disabled queer and trans people, but the normalization of disability as an insult harms all people with disabilities, as well as the goals of the broader disability rights and justice movements. This change from Meta is particularly disheartening in light of the resurgence of the ‘R-word’ in both physical and digital spaces. That resurgence has been spurred in part by Elon Musk, who has maligned critics on the basis of disability and who frequently uses the word on the platform he owns, X (formerly Twitter). Indeed, one study found that use of the slur tripled after a post in which Musk used the word (and noted that Google Trends showed a similar uptick in online search interest after the post). Together, these factors will contribute to the creation of an online environment that suppresses the speech of people with disabilities – a far cry from the “free speech roots” that Meta claims to be preserving through these changes.

That’s the thing about content moderation policies – they are more than just the arbitrary choices of tech behemoths. Instead, they are a set of normative decisions that reflect and contribute to senses of right and wrong, of what belongs and what does not. When the owner of a platform repeatedly uses an ableist slur, other people on the platform think it’s okay and use it more often themselves. When a company’s rules allow users to inflict emotional harm on other users on the basis of identity, some people will stop using that company’s platforms – which could chill their speech and their ability to participate in public discourse.

Some people with disabilities – and LGBTQ+ people, and other users who consider themselves allies to these communities – will stop using Meta platforms out of protest or out of a desire to preemptively protect themselves from harm. Others will choose to stay, also potentially out of protest or out of a desire to retain connections to others in an accessible way. This is a personal decision, but one that has become more difficult than it was mere months ago – because content moderation policies and practices have real-world consequences, both for users and for the platforms themselves.

Meta is a private company, as are most other social media platforms. The First Amendment protects its right to enact whatever content moderation policies its leadership sees fit, including policy shifts like these – but that doesn’t mean that it should, or that these policies somehow protect free speech. It shouldn’t, and they don’t – rather, they inhibit the flow of ideas from diverse perspectives and, in doing so, worsen the experience for all users.

There is a different way forward for Meta and any other platform considering similar content moderation decisions. In 2020, TikTok made headlines after published documents showed that it had instructed content moderators to suppress videos featuring disabled people, among others. Specifically, moderators were told to exclude from TikTok’s “For You” page any video that featured someone with an “abnormal body shape” or a “facial deformity,” or anyone who was “obese or too thin.” This policy obviously harmed disabled TikTok creators, as well as TikTok users who were interested in their content but unable to find it organically. After heightened attention from the press, TikTok reversed these policies and eventually grew into a platform home to many disabled influencers with large followings and active disabled communities – so much so that some disabled people argued against the TikTok ban, stating that the app was a critical safe space for them.

The lesson here for Meta, and for other platforms, is that it is possible for a platform that once had discriminatory content moderation policies to change course and become a welcoming space for the very populations it once most harmed. Meta can make this choice, and should – even if it isn’t likely to do so anytime soon. But, even more than that, the lesson for all platforms is that their content moderation decisions matter – that they can be a lever through which platforms create spaces that show disabled people, and other marginalized groups, that they do belong. Rather than contributing to a culture of exclusion – even one that may feel like it is both spurred and sanctioned by those in power – social media platforms can use content moderation decisions and other policies to create the safe spaces that are needed now more than ever. These decisions may even come with the added bonus of building a more active and loyal user base – a win-win for users and platforms alike.

Authors

Ariana Aboulafia
Ariana Aboulafia leads the Disability Rights in Technology Policy project at the Center for Democracy & Technology. Her work currently focuses on maximizing the benefits and minimizing the harms of technologies for people with disabilities, including through focusing on algorithmic bias and privacy ...
