How Zuckerberg Reminded Everyone That Meta Doesn’t Care about the Global Majority

Shashank Mohan, Afia Jahin, Shahzeb Mahmood / Jan 30, 2025

Meta founder and CEO Mark Zuckerberg attends the inauguration ceremony where Donald Trump was sworn in as the 47th US President in the US Capitol Rotunda in Washington, DC, on January 20, 2025. (Photo by KENNY HOLSTON/POOL/AFP via Getty Images)

Meta’s decision earlier this month to discontinue its third-party fact-checking program in the US and to replace it with a user-driven mechanism akin to X’s Community Notes echoes a concern long apparent to many: that the preferences, priorities, and predicaments of user bases outside North America and Europe are invisible to Meta. This is despite the fact that the majority of the world’s internet users reside in low- and middle-income regions outside the Global North.

Just under two weeks before Donald Trump’s inauguration as President of the United States, Meta founder and CEO Mark Zuckerberg released a video announcement asserting that Meta had been “censoring” too much online speech and that the company would make a variety of policy changes, including terminating fact-checking and relaxing its posture on misinformation and hate speech. While a Meta executive recently claimed there is no plan to end fact-checking outside of the US for now, if history teaches us anything, what is currently a policy decision piloting in the US will eventually be deployed globally, affecting all of the over 3 billion users worldwide. These users are predominantly distributed across culturally, racially, ethnically, religiously, and politically diverse regions in the Global Majority with vastly different online information ecosystems.

Is there evidence of the efficacy of the fact-checking program in the Global Majority?

Fact-checking as a process is a longstanding practice inside newsrooms worldwide. It received fresh momentum, especially outside of the Global North, when Meta and Google—and subsequently, the donor community—put resources behind it. Many commentators and experts have already debated the efficacy of Meta’s fact-checking program, but what remains true is that it was never intended to be a silver bullet. Rather, it was conceived as a palliative measure complementing other systemic interventions. Although the program is a band-aid solution to distrust in the information ecosystem—a problem perpetuated mainly by social media platforms themselves—it has offered a critical tool to identify and address harm by providing a steady and objective reference point, especially during crisis and civic events in under-resourced regions.

Error rates provide a lens to assess efficacy, albeit one tempered by certain shortcomings. Meta conceded in late 2024 that of 172,550 posts demoted as misinformation after being processed by fact-checkers, only 5,440 posts—approximately 3%—were restored. Notably, fact-checkers merely identify, flag, and label inaccurate information, while the decision to remove content rests solely with Meta. An illustrative example of systemic issues in Meta’s internal designs, processes, and systems is the company’s repeated failure to detect and remove disinformative test ads. At the very least, this indicates that there is no empirical evidence supporting Zuckerberg’s stance on the supposed inefficacy of the third-party fact-checking program.

As such, the decision to move to a Community Notes model means that rather than trained and certified professionals assessing the accuracy of online information, fact-checking will be crowd-sourced to dispersed users who bring their own prejudices, biases, cultural contexts, and lived experiences to the debate. Despite the primary appeal of democratization and consensus-building inherent to Community Notes, it remains susceptible to groupthink, mob mentality, bias, inconsistent standards, coordinated manipulation, and an overreliance on opaque algorithms that determine the priority and visibility of content and notes. Furthermore, this crowd-sourced model is likely to be ineffective in combating misinformation, as the slow process of reaching consensus allows false content to spread unchecked in the crucial minutes and hours after it is posted. Communities alone cannot—and should not—shoulder the significant burden of discerning truth from falsehood, especially given the expertise required to navigate the ever-evolving landscape of digital propaganda.

Crucially, this is not a debate about the comparative efficacy of the two methods. Both the third-party fact-checking program and Community Notes have their strengths and weaknesses, and no evidence-based studies have conclusively demonstrated the effectiveness of one over the other, particularly in the context of the Global Majority. Effectiveness, rather, is relative and contextual, influenced by factors such as regional dynamics, funding levels, socio-economic and political conditions, and other variables, including the interaction of fact-checking with other content moderation mechanisms. What is more critical in this discourse is recognizing that the third-party fact-checking program and Community Notes were both introduced as reactive measures, driven by shifting political dynamics in the US and designed to cater to domestic political agendas rather than addressing underlying structural issues. By neglecting the diverse voices and needs of communities in the Global Majority—where the majority of users reside—Zuckerberg appears to have aligned with President Trump’s MAGA and America First policies, effectively relegating Global Majority communities to an afterthought in their design and implementation.

If Meta truly prioritized empowering people with a “voice” and upholding free speech, it would not have resorted to breadcrumbing a patchwork of inconsistent and ineffective solutions over and over again, instead devising a more holistic and robust strategy. It appears Meta’s leader at the helm is more focused on polishing the hood rather than fixing the engine, deflecting attention from the deeper mechanical failures—the dysfunctions within the company’s designs, processes, and systems.

Why is this policy shift potentially bad news for the Global Majority?

First, the nuanced local expertise embedded in the third-party fact-checking program enables fact-checkers to contextualize the social, cultural, and linguistic diversity present in the Global Majority, one that would be difficult to replicate with a global, crowd-sourced model. For example, the term “gau rakshak” (or “cow protector”) in India carries deep socio-political connotations tied to religious vigilantism. Misinterpretation or miscontextualization, especially of content in vernacular or containing idiomatic references, could lead to the wrong content being flagged or allowed, inflaming tensions rather than resolving them.

Furthermore, the Global Majority is more susceptible to coordinated manipulation and weaponization of crowd-sourced systems by politically or ideologically motivated groups. For instance, during the 2020-21 farmers’ protests in India, misinformation spread rapidly from both sides. Similarly, in the 2022 Brazilian elections, disinformation about candidates proliferated online, and in Kenya, misinformation—ranging from claims that the virus only impacts the rich to claims that alcohol is a remedy—gained widespread traction. A crowd-sourced model could be easily exploited by partisan groups to amplify biased narratives, especially during crisis situations, whereas professional fact-checkers usually provide more balanced, verified information to counter these claims.

Second, crowd-sourced systems are inherently vulnerable to dominant or entrenched political bias or groupthink, amplifying the majority’s perspective at the cost of sidelining marginalized voices and exacerbating mis- and disinformation. For example, Bengalis in Bangladesh (98% of the population) often marginalize indigenous communities in the Chittagong Hill Tracts. In India, the Hindu majority (80% of the population) frequently dominates Muslim voices, while in Bangladesh, Muslims (90% of the population) sideline Hindus, Buddhists, and Christians. In Sri Lanka, Muslims have faced discrimination, harassment, and violence for over a decade at the hands of Sinhala-Buddhist nationalists. Anti-Rohingya sentiment in Myanmar fueled widespread hate speech, which a crowd-sourced model could amplify, as local users often lack the neutrality required for fair moderation.

Third, the third-party fact-checking program and Community Notes are operationally distinct: the former relies on a standardized and consistent approach to engaging credible local organizations, while the latter depends on crowd-sourced input. Especially in under-resourced regions of the Global Majority, limited foundational and digital literacy would hinder effective and equitable participation, creating a vacuum where more educated and technologically equipped urban users may dominate contributions. This skews outcomes and sidelines rural and marginalized perspectives, making this wholesale shift to a crowd-sourced model insensitive to the unique needs of the Global Majority.

Both the third-party fact-checking program and Community Notes serve as provisional stopgaps that other processes should complement. However, the policy shift, devoid of consideration for these and other critical factors, underscores an approach that favors reactive crisis management over the proactive, systemic integration of safeguards into the company’s designs, processes, and systems, and that treats the Global Majority as a postscript in Meta’s decision-making and strategic priorities.

What are the likely reasons for backpedaling on the third-party fact-checking program?

Zuckerberg’s video statement makes for interesting viewing, and it would be remiss not to recognize the calculated masterstroke that the decision represents in the current political climate and the evolving dynamics of platform governance.

On one hand, his wrangle with President Trump—stemming from the initial decision to indefinitely suspend him after the January 6, 2021, attack on the US Capitol—strained his influence, especially in the shadow of his tech billionaire counterpart Elon Musk’s growing prominence. President Trump, who accused social media platforms of censorship during his first administration, suggested that the policy change was “probably” a response to his threats against Zuckerberg and Meta. Experts concur, viewing Zuckerberg’s actions as pandering to MAGA narratives to regain President Trump’s favor, appeal to right-wing acrophily, reassert his relevance, and promote America First ideology that, unsurprisingly, mirrors his own worldviews. Mimicking Musk, Zuckerberg’s comments on media pressures and censorship echo MAGA rhetoric, signaling his ambition to join the elite ranks shaping political power.

In making his move, Zuckerberg reportedly consulted Meta board member Marc Andreessen, also a Trump supporter, and a handful of Meta executives, including Joel Kaplan, Meta’s newly appointed head of global public policy and a longtime Republican operative. Notably, Zuckerberg did not consult his own employees or representatives from civil society groups, as he had done in the past. The outsized influence of Zuckerberg’s own whim in these decisions, and thus over the online experiences of billions of Global Majority users, is cemented by his control of over 60% of voting power despite holding only 13% of Meta’s stock, a circumstance he bragged about on Joe Rogan’s podcast just days after the announcement.

On the other hand, the new Trump administration marks a shifting zeitgeist for tech regulation. Zuckerberg has been blunt in his criticisms, accusing Europe of institutionalizing censorship, Latin America of operating secret courts, and the Biden administration of censorship—claims likely driven by the intense regulatory scrutiny faced by tech companies over the last four years. Meta’s decision to terminate the third-party fact-checking program in the US signals the start of a long-term structural shift in content governance, building on the downsizing of trust and safety teams in 2023 and set to crystallize over the next four years. By then, regardless of which administration holds power, it will be too deeply entrenched to dismantle easily.

Furthermore, by adopting a “democratized” moderation system that relies on the collective wisdom of crowds, Zuckerberg aligned with common rhetoric around democracy and freedom of expression, two cornerstone ideals of US foreign policy, which may enable the company to resist content removal requests from foreign governments. Currently, variants of the crowd-sourced model are used by other US-based platforms, including Wikipedia, X, and YouTube. With Meta joining this bandwagon, this could mark the emergence of a new content governance paradigm.

Zuckerberg’s decision also serves a clear commercial purpose. Since Meta is a profit-driven enterprise dependent on platform traffic for revenue, polarizing content is likely to increase engagement and, in turn, boost the company’s earnings, while shutting down the third-party fact-checking program reduces Meta’s ongoing capital investment in content moderation.

According to Zuckerberg, Meta’s online platforms should allow the same sort of speech that can appear on television and the floors of Congress. However, this comparison overlooks a crucial distinction: when journalists or politicians make harmful statements in public fora, audiences can contextualize these remarks within political discourse, and mechanisms exist to hold them accountable. On online platforms like Facebook and Instagram, misinformation and hateful conduct are initiated on private, unaccountable platforms by the average citizen—friends, colleagues, family, and neighbors. This can lead to offline harm, especially in the Global Majority, where digital literacy remains a work in progress.

Bidding adieu to the third-party fact-checking program demonstrates Meta’s alarming indifference to the needs of its global user base, effectively writing off the majority of its users, who often lack digital literacy and require nuanced consideration of their complex and diverse social, cultural, political, and linguistic contexts. In effect, this relegates the Global Majority to a mere afterthought. Although Meta’s top executives may believe otherwise, social media experiences are neither universal nor homogenous—users in the Global North have markedly different experiences compared to those in the Global Majority, and diversity within the Global Majority means that user experiences vary significantly across countries and regions.

Perhaps, in a few years, with another shift in the winds of US politics, Meta may once again feign an understanding of the marginalized groups who keep its platforms running. At his swearing-in, President Trump was flanked by tech oligarchs, indicating the changing tides of the global order. One could reasonably argue that Meta’s policy shift is a microcosm of the technology sector’s trajectory—wherever that may lead. The question remains: where does the Global Majority stand at this crossroads? For now, men like Zuckerberg, Musk, Andreessen, and Kaplan continue to play an outsized role in determining the online experiences—and thus the politics—of the Global Majority populace.

Authors

Shashank Mohan
Shashank Mohan is a Tech Policy Fellow with Tech Global Institute and Associate Director at the Centre for Communication Governance at National Law University Delhi. Shashank’s work focuses on data justice, platform accountability, the impact of artificial intelligence on society, and access to digi...
Afia Jahin
Afia Jahin leads communication at Tech Global Institute. Afia has previously worked as an editorial assistant with the op-ed section at The Daily Star in Bangladesh. She is a writer and content strategist focusing on projects which help shed light on ongoing political crises and human rights concern...
Shahzeb Mahmood
Shahzeb Mahmood is the Head of Research at Tech Global Institute, and a Barrister of the Honorable Society of Lincoln’s Inn. Shahzeb’s research focuses on platform accountability, cyber intrusion technologies, and antitrust actions and strategic litigation, specifically in the Global South context. ...
