How Platform Shifts on Content Moderation Are Escalating Harm in the India-Pakistan Crisis
Usama Khilji / May 2, 2025

Where else but South Asia can you find the largest share of the world’s population awash in war-related misinformation and memes?
As tech platform owners shift their priorities, redefining what counts as free speech and how far content, especially hate speech and disinformation, will be moderated, the nature of conflict in the digital age is also evolving, and potentially becoming more dangerous, as the India-Pakistan escalation of the past week illustrates.
Indians and Pakistanis are no strangers to war, having fought three major wars and several skirmishes along the border since the bloody partition of British India in 1947. As recently as 2019, both militaries conducted airstrikes in each other’s territory, while #SayNoToWar trended on social media as citizens on both sides urged restraint.
The narrative is not so peaceful this time. India has accused Pakistan of orchestrating a terrorist attack in Pahalgam, in Indian-administered Kashmir, that took 26 lives on April 22. Varying accounts of how the tragedy unfolded have gone viral on social media, prompting calls for all-out war from India’s corporate media that have been amplified online. Several of these accounts paint a communal picture that has caused friction between Hindu and Muslim communities across India, with Kashmiri students bearing the brunt of it.
For context, Jammu and Kashmir is a disputed territory for which the United Nations has called for a plebiscite on self-determination by the people of Kashmir. The region remains divided between India and Pakistan, with a small part under Chinese control. The Kashmiri movement for self-determination suffered a blow when the Modi-led government revoked Article 370 of the Indian Constitution, ending the semi-autonomous status of Indian-administered Jammu and Kashmir. India maintains that Pakistan supports a separatist movement in Jammu and Kashmir, and has pointed to Pakistan as responsible for the April 22 attack on tourists in Pahalgam.
How platforms moderate social media content merits scrutiny at a time when misinformation and disinformation are rife and carry real-world consequences, especially for minorities. Three key concerns stand out.
First, platforms are increasingly shifting their moderation priorities, often allowing hate speech to remain online. Meta, for instance, has followed X’s lead by adopting a policy of not moderating misinformation in the name of “free speech”, despite the potential for serious real-world harm. Amnesty International had warned in February that “Meta’s new content policies risk fueling more mass violence and genocide.” That warning may now be materializing in South Asia.
Stand With Kashmir, a grassroots movement, has documented several instances of Kashmiri students being attacked across India after the Pahalgam attack. Several Instagram stories and Facebook posts also pin the blame on India’s already marginalized Muslim community.
X is rife with hate speech and disinformation against Kashmiris in India after the attack, but posts are not being moderated despite being reported. This mirrors moderation practices around hateful and violent content relating to Israel and Palestine, where pro-Palestine content has been taken down and shadow-banned disproportionately compared to pro-Israel content, a pattern Human Rights Watch has called “systematic.” Last year, an experiment revealed that Meta approved political ads inciting violence ahead of India’s elections, as the Guardian reported. Platforms also appear to be ignoring investigations they themselves commissioned, such as the BSR report documenting Facebook’s role in fanning the genocide of the Rohingya in Myanmar, whose recommendations included “stricter implementation of Facebook’s credible violence policy” under its community standards.
It’s important to remember that viral communal posts on WhatsApp in India prompted the platform to introduce limits on message forwarding.
Second, platform collusion with states is leading to unchecked surveillance and censorship of social media. This manifests in various ways, including compliance with local laws that violate international human rights principles. For instance, amid the latest tensions between India and Pakistan, several Instagram and X accounts of Pakistani celebrities, along with YouTube channels of Pakistani journalists and media outlets, have been blocked for viewing in India. Is this a proportionate response in line with international human rights law, and is such online censorship necessary for platforms to institute at the behest of the state? These are questions platforms must answer if they are to comply with the International Covenant on Civil and Political Rights, which guarantees the rights to freedom of expression and access to information, and the United Nations Guiding Principles on Business and Human Rights, which set out companies’ responsibility to uphold human rights.
Additionally, several people in India are reportedly being questioned by law enforcement for engaging with posts from pages advocating for Kashmiris’ rights, such as Stand With Kashmir. This shows how freedom of expression is under attack, with platforms and law enforcement together targeting people simply for engaging with online content.
Third, US-based companies follow the foreign policy imperatives of the US government when moderating social media content. This means mentions of individuals and organisations the US government designates as dangerous are censored, and enforcement is often disproportionate. The Meta Oversight Board has found that Meta’s approach to the use of the word “Shaheed” (martyr) under its Dangerous Organizations and Individuals policy “disproportionately restricts free expression.” Kashmiris and those commenting on Kashmir faced similar censorship under such policies in 2016, and the current escalation in tensions raises further risk of disproportionate enforcement. A major issue that continues today is the lack of transparency around the list of dangerous organizations and individuals that Meta censors, as reported in this Intercept piece.
Deriving platform policy from one country’s foreign policy and its designations of groups and individuals means these platforms are not truly global. As documented previously, Meta has silenced the accounts and discussions of Kashmiri activists. This is why Palestinian and Kashmiri activists have faced disproportionate censorship: they lie on the “wrong” side of global geopolitics.
For platforms to be truly global, participatory, and respectful of their users’ rights, content moderation policies must be equitable, account for the impact that dangerous incitement to violence can have on individuals, and not discriminate on the basis of politics; and companies must stand up to governments that want to censor voices and restrict access to information, rather than colluding with them.