Perspective

Why Marginalized Areas Bear the Brunt of the Disinformation Crisis

David Nemer / May 1, 2025

David Nemer is the author of Technology of the Oppressed: Inequity and the Digital Mundane in Favelas of Brazil.

In the heart of Vitória, Brazil, there is a collection of favelas (urban shantytowns) known as the Território do Bem (Territory of Good)—a name given by its residents that reclaims dignity in a cityscape often marked by neglect. It is here that the CalangoLab team and I conducted a participatory study to understand how technology shapes information access and misinformation exposure among marginalized communities.

The findings of our report, Technology and Disinformation in the Território do Bem, are as revealing as they are urgent. They highlight a digital landscape defined not by inclusion but by containment. In this landscape, Meta, through platforms such as WhatsApp, Instagram, and Facebook, not only dominates access but also shapes what people believe to be true. Yet the community is far from passive. People know when they're being misled. They know who holds the power. And most importantly, they know who's being left behind.

The Digital Cage

Smartphones are the gateway to the Internet for nearly all residents: 92.8% of our 404 respondents own one. But how people connect matters. Most rely on mobile data plans that are often limited in volume, costly, and slow. Many fall into "zero-rating traps," a practice in which telecom companies let certain apps (such as WhatsApp or Facebook) be used without counting against users' data allowances. While this is often sold as a form of digital inclusion, it effectively restricts the Internet to tightly controlled walled gardens where disinformation circulates with little context or challenge.

For example, when disinformation is shared through a WhatsApp message—often stripped of its source, presented without links or citations—users may have no choice but to take it at face value. Even if they want to fact-check it, their data plans may not allow it. Nearly a quarter of our respondents report never verifying the accuracy of a message before sharing it. And among those who do, many rely on Google searches or intuition—"if the content makes sense"—which isn't always enough to stem the tide of false narratives.

The platforms, particularly Meta's WhatsApp, Facebook, and Instagram, are ubiquitous: WhatsApp is used by 79.7% of respondents, Instagram by 72%, and Facebook by 47.5%. These platforms are not just communication tools but the primary interfaces through which people access and experience the Internet. The idea of the Internet as the open, global information commons we once imagined does not hold true in places like the Território do Bem, just as it does not in other parts of the Global South. Instead, people experience a corporate, siloed version of the web.

Perceptions Matter: They Know It's a Problem

Contrary to harmful stereotypes that paint poor and marginalized populations as passive or uninformed consumers of disinformation, our study shows that residents of the Território do Bem possess a critical awareness of their digital environment.

Over 90% of respondents believe that social media platforms promote disinformation. Yet, despite this distrust, these platforms remain central to how residents access information. Television remains the most trusted source (77.99%), but Instagram (51.98%) and WhatsApp (39.9%) follow, especially for everyday updates and community-related news.

This reveals a striking contradiction: 61% of participants say they trust traditional journalistic sources more than what they see on social media, but it is precisely through social media that they receive most of their information. This paradox is not the result of ignorance—it is a product of infrastructure. If the only digital spaces available to you are WhatsApp and Instagram, and if your data plan doesn't allow you to browse freely or read full articles, then your choices are limited not by preference but by design.

The platforms know this. And they benefit from it.

Who Is Responsible?

When asked who should bear responsibility for stopping disinformation, 58.42% of participants said that individuals and platforms share the burden, while a smaller portion (14.85%) placed the responsibility squarely on the platforms. At the same time, a majority (55.22%) believed that individuals should verify the content they consume and share. But here we see another contradiction: while over half accept personal responsibility for verifying information, only 20.77% report actually doing so.

It's not that people don't care. It's that they're navigating an impossible terrain—one where critical thinking is essential but unsupported by the tools or access necessary to put it into practice. In that sense, the platforms' hands-off approach to disinformation disproportionately harms those already vulnerable.

The consequences are not abstract. Misinformation about public safety, health, and politics has led residents to skip work, avoid school, or even miss medical appointments. In a territory where a missed day of work can mean losing crucial income, or even a job, and where children and teenagers who skip school lose access to essential daily meals, deepening food insecurity, this is not merely a problem of "fake news." It is a problem of social harm, structural inequality, and injustice.

Regulation Is Not Optional—It's Urgent

Our study underscores the urgent need for regulatory frameworks that protect users not just from harmful content but also from systemic exploitation. The monopolistic control of platforms like Meta's means that residents of marginalized communities are not just users but data generators for a corporate empire with little accountability.

This is particularly alarming when we consider how little these platforms do to combat disinformation at the local level. Our respondents reported that disinformation related to COVID-19, politics, and crime was especially prevalent. These narratives are not merely annoying or disruptive—they're dangerous. They erode trust in institutions, delegitimize the democratic electoral process, suppress voter participation, and provoke fear that can destabilize communities.

The current debate over platform regulation often centers on abstract notions of free speech or corporate innovation. But in places like the Território do Bem, the stakes are concrete. The absence of regulation doesn't protect speech—it protects power. It protects platforms' right to profit from chaos and confusion while deflecting blame onto users who lack the resources to meaningfully resist.

And make no mistake: when regulatory gaps exist, they hit hardest in the favelas, the peripheries, and the marginalized areas. These communities already struggle with underfunded public and private infrastructure, limited access to healthcare and education, and high levels of social stigma. To leave them at the mercy of unchecked algorithmic systems is not just negligent—it's inhumane.

Public Policy for Structural Change

The data from this research demands more than superficial media literacy campaigns or one-off awareness workshops. We must recognize disinformation as a structural problem rooted in inequality, corporate monopolies, and digital exclusion.

Public policy must address this holistically. That means:

  • Promoting open and equitable Internet access, breaking the zero-rating model, and ensuring people can navigate the Internet freely and securely.
  • Investing in localized digital literacy programs, co-created with community members and tailored to their realities.
  • Mandating transparency and accountability for platform algorithms, especially those that govern content visibility and engagement.
  • Supporting community media initiatives like Calango News, which offer residents locally relevant and trustworthy information, often in stark contrast to the clickbait culture of corporate networks.

These are not luxuries. They are necessities if we want a democracy that includes everyone.

A Call for Global Responsibility

What's happening in the Território do Bem is not an isolated phenomenon. It mirrors the dynamics in favelas across Brazil, townships in South Africa, informal settlements in India, and low-income neighborhoods in the United States. In each case, the same platforms are present, the same patterns of disinformation emerge, and the same lack of accountability persists.

As researchers, policymakers, and citizens, we must ask: Whose Internet are we building? Who is it for? And who pays the highest price when it fails?

It is time to stop treating disinformation as a user behavior problem and start seeing it for what it is: a structural, infrastructural, and systemic problem engineered by design. Regulation is not about limiting freedom. It is about ensuring equity. It is about refusing to accept that the most vulnerable should always be the most expendable.

In the Território do Bem, people know the score. They know the platforms promote disinformation. They know they have some responsibility. But they also know they are up against giants.

Now it's time for the giants to be held accountable.

Authors

David Nemer
David Nemer is an Associate Professor in the Department of Media Studies and an affiliated faculty member in the Department of Anthropology at the University of Virginia. He is also a Faculty Associate at Harvard University's Berkman Klein Center for Internet and Society (BKC). Nemer is the author of Technology of the Oppressed: Inequity and the Digital Mundane in Favelas of Brazil.
