Perspective

Gendered Disinformation as Infrastructure: How Tech Billionaires Shape Political Power

Carine Roos / May 26, 2025

This post is part of a series of contributor perspectives and analyses called "The Coming Age of Tech Trillionaires and the Challenge to Democracy." Learn more about the call for contributions here, and read other pieces in the series as they are published here.

Gendered disinformation is not merely the spread of falsehoods about women; it is a structural and deliberate practice rooted in the intersection of misogyny, technological design, and political influence. Its impact goes beyond digital defamation campaigns: it shapes public perception, redefines the terms of democratic participation, and turns technology into a tool for advancing authoritarian agendas.

Often anchored in deepfakes, sexualized disinformation, and coordinated attacks, this strategy aims to silence women who challenge power structures, undermine their public legitimacy, and discourage them from occupying decision-making spaces. These effects intensify when the targets are racialized women, LGBTQIA+ individuals, or human rights defenders, revealing intersectional dynamics of oppression that operate through the layering of structural inequalities. In this context, gendered disinformation becomes a sociotechnical infrastructure: intentionally designed, maintained, and monetized by a small group of actors with immense technological power, the billionaires behind digital platforms. These actors not only shape the content of public debate; they define the very boundaries of who can occupy that space with legitimacy and safety.

Although anti-gender movements gained traction in the 1990s, their strategies have been significantly amplified by today's digital platforms. The 2017 attack on philosopher Judith Butler in Brazil, when far-right protesters harassed her under false accusations of promoting "gender ideology," illustrates how social media platforms function as accelerators of moral panic and political polarization. Within days, fueled by platforms like Twitter (now X), more than 370,000 signatures were gathered on a petition against Butler's visit to Brazil, claiming she posed a threat to the "natural order of gender, sexuality, and family," even though she was not scheduled to speak on gender at any of the planned events.

YouTube's algorithm, for example, has recommended content from Red Pill creators and so-called "masculinity gurus," many of whom portray feminist, Black, and LGBTQIA+ voices as threats to male identity. A NetLab investigation found that 80% of these channels in Brazil are monetized, profiting from ads, Super Chats, course sales, and direct donations, all while promoting gender-based hatred to thousands of viewers. By fueling this ecosystem, platforms recast feminist and LGBTQIA+ rights as threats to national identity, religious values, and the "traditional family," distorting democratic debate through the mobilization of fear, resentment, and disgust, emotions that function as mechanisms of exclusion and social control. And beyond monetizing this content, platforms like Meta and X have made institutional changes that render these environments even more permissive.

Both companies have taken concrete steps to dismantle their content moderation frameworks. In 2025, Mark Zuckerberg announced the end of fact-checking partnerships in the US, removed content restrictions related to gender identity and immigration, and relocated moderation teams from California to Texas, justifying the move as an attempt to “reduce ideological bias.” In an interview on Joe Rogan’s podcast, Zuckerberg lamented what he described as a lack of “masculine energy” in companies, framing Meta’s retreat from content regulation as a supposedly necessary ideological reorientation. In doing so, he publicly aligned with Elon Musk’s model at X, which had undergone a systematic dismantling of its integrity policies. Over two years, the platform eliminated rules aimed at curbing electoral, public health, and humanitarian disinformation, including guidelines on “informational harm.” Specific protections for trans individuals, such as bans on misgendering and deadnaming, were revoked. Stricter moderation measures were replaced with softer sanctions, such as merely limiting the reach of harmful content, a strategy Musk summarized as “freedom of speech, not freedom of reach.”

It’s worth noting that even before these statements, platforms like Meta and YouTube had already shown a tendency to prioritize deregulation over accountability. During Brazil’s 2022 elections, these platforms allowed the circulation of electoral ads containing explicit disinformation, as revealed by an investigation conducted by NetLab in partnership with Global Witness. Despite repeated warnings about content inciting violence and undermining democratic order, these companies profited from the viral spread of such material.

By reshaping the content moderation policies of X and Meta, Elon Musk and Mark Zuckerberg demonstrate how tech billionaires act as political agents. By discrediting reports of gender-based violence and loosening regulations under the guise of neutrality, they make digital spaces increasingly unsafe for historically marginalized groups.

The consequences are alarming. In Brazil, the impacts of gendered disinformation have been documented by initiatives such as MonitorA, which revealed that Black, trans, feminist, and Northeastern women candidates were frequent targets of gender-based political violence during the 2020 and 2022 elections. Congresswoman Benedita da Silva, for instance, was subjected to racist and misogynistic slurs that dehumanized her by comparing her body to animals. Trans candidate Duda Salabert received death threats, was consistently misgendered, and was attacked with transphobic messages.

Gendered disinformation systematically erodes the conditions under which women can access and remain in institutional spaces. The case of Manuela D'Ávila, as documented by Lucina Di Meco in Monetizing Misogyny, illustrates the severity of these impacts. During her mayoral campaign in Porto Alegre, she was targeted with fake news about corruption, fabricated luxury trips abroad, and rape threats against her young daughter. The attacks escalated to the point that Manuela considered going into exile and withdrew from future candidacies.

Journalist Patrícia Campos Mello was also the target of coordinated defamation campaigns, falsely accused of exchanging sexual favors for information, an explicit attempt to discredit her journalistic work. As highlighted in the report Big Tech and Misogyny in Brazil, this tactic illustrates how digital violence is strategically used to undermine the credibility and safety of women in public life.

These attacks are part of a transnational trend that spans continents. In countries such as Hungary, Italy, India, and Tunisia, women who challenge authoritarian structures or advocate for women's rights have been recurrent targets of coordinated disinformation campaigns and digital violence. In Hungary, parliamentarian Ágnes Kunhalmi was subjected to misogynistic photo montages and unfounded accusations of foreign funding. In Italy, former minister Valeria Fedeli saw her educational initiatives distorted and falsely framed as child pornography, facing orchestrated attacks by far-right groups. In India, parliamentarian Priyanka Chaturvedi received rape threats against her 10-year-old daughter after a fake quote was circulated in her name. And in Tunisia, lawyer and politician Bochra Belhaj Hmida was threatened with public stoning for her advocacy for gender equality. Whether it's Ágnes Kunhalmi in Hungary or Bochra Belhaj Hmida in Tunisia, the strategy remains the same: discredit, intimidate, and silence.

The connection between these cases lies not only in their misogynistic content but in the technological architecture that sustains it. Gendered disinformation is not a system glitch; it is a core feature of platforms that monetize engagement at all costs. Tech billionaires shape not only the content of public debate but also its boundaries. Their platforms determine not just who gets to speak in the digital public sphere, but also who bears the consequences for doing so.

Therefore, tackling gendered disinformation requires more than the removal of harmful content. It demands a structural rethinking of platform governance, the establishment of robust regulation of the attention economy, and effective accountability for Big Tech’s active role in the erosion of democracy. As long as digital spaces remain hostile to marginalized voices, democracy will continue to be structurally biased and profoundly unequal.

Authors

Carine Roos
Carine Roos is a researcher and human rights educator specializing in gender, technology, and digital governance. She holds an MSc in Gender from the London School of Economics and focuses on the intersections between gendered disinformation, platform accountability, and democratic resilience in the...
