The Information Crisis That Brought India and Pakistan to the Brink
Nabiya Khan / Jun 2, 2025

The recent India-Pakistan conflict following the April 22 attack in Pahalgam, Jammu and Kashmir, did not play out solely through military maneuvers or cross-border missile and drone strikes. A parallel battle took shape across social media platforms and mainstream news outlets. A recent report by the Center for the Study of Organized Hate (CSOH), Inside the Misinformation and Disinformation War Between India and Pakistan, lays out in granular detail how that information war took shape. In the report, we show that the flow of propaganda was not accidental or sporadic. It was strategic, iterative, and enabled not only by social media platforms but also by mainstream newsrooms that became amplifiers rather than filters.
Our report, based on an analysis of 1,200 posts across social media platforms, found that information did not simply circulate; it metastasized. Our monitoring across X, Facebook, Instagram, and YouTube found verified accounts playing a central role in this information war. On X, of 437 posts containing misinformation, only 73 carried Community Notes. The volume and velocity of misinformation and disinformation were staggering: fake videos of airstrikes, AI-generated videos of political leaders conceding defeat, screenshots of nonexistent news articles, and repurposed footage from other conflicts.
The Disinformation Feedback Loop
But what matters is not just the origin of disinformation; it is how it spreads and acquires legitimacy. This is where the role of legacy media becomes central. A large share of disinformation did not remain confined to the margins of social media platforms. It was reported verbatim in India’s mainstream news media. The result was a feedback loop that validated the original misinformation and disinformation, giving it not just reach but also credibility.
We observed a pattern: a post with misleading or false claims circulates widely on a platform like X. A journalist encounters it, not through formal channels, but via their feed or an internal chat. It is framed, shared, and broadcast, sometimes on prime time. That coverage, now carrying the authority of a newsroom, is re-posted onto the same platforms, completing a feedback loop. A fabricated video that started as a fringe post now exists as a “reported” event.
This cycle, though not always laced with malicious intent, reflects speed-based decision-making under the pressure of rolling coverage. It shows how disinformation survives not because of its originality, but because of its ability to latch onto institutional structures that lend it credibility. The disinformation is not only shared but also repeated by those perceived as gatekeepers. And in the context of a high-stakes geopolitical conflict, that repetition has consequences. Once these narratives harden, they shape public mood, embolden military responses, and close off diplomatic exits.
This pattern was not hypothetical. It was repeatedly visible in our data. A video game clip from Arma 3 circulated on X as footage of Pakistani JF-17 jets “delivering justice” deep inside Indian airspace. After it trended, it was shared by Pakistani journalists. On the Indian side, various news outlets used a 2023 naval drill clip to claim that the Indian Navy had attacked Karachi Port as part of “Operation Sindoor.” Once aired, the misinformation gained a second life, legitimized by the reputations of the outlets that repeated it.
This loop, in which social media posts inform newsrooms that then validate the original fake, was one of the most dangerous vectors of spread. In conflict settings, speed often takes precedence over accuracy. But that tradeoff is no longer benign when it actively fuels escalation. When reporting is based on what is trending instead of what is verified, newsrooms become agents of amplification for propaganda.
The outcome of such widespread disinformation can be seen in the vicious hate directed at Indian Foreign Secretary Vikram Misri and his family. Many of those who had been buoyed by the media and social media narrative of India’s military dominance suddenly had to make sense of the ceasefire when it was declared. Misri and his daughter became targets of this hate, forcing him to lock his account on X.
The role of verified accounts on X in amplifying disinformation is particularly important. The blue check has shifted from a tool of authentication to one of influence. Under the new model, accounts that pay for reach are algorithmically privileged. Our data showed that a substantial proportion of viral disinformation originated from verified accounts, many of which openly identified as Hindu nationalist influencers. On May 8, Hindu nationalist influencer Abhijit Iyer-Mitra (286.7k followers on X) explicitly praised users who spread unverified claims, such as a coup in Pakistan, planes fleeing the country, a captured pilot, and attacks on Karachi and Lahore, asserting that they were acting as the "electronic warfare arm of the motherland" and waging a form of "information warfare" to advance India’s narrative.
Platform failure, in this case, was not passive but structurally embedded in the business models of Big Tech. Community Notes, X’s most visible moderation tool, arrived too late and too inconsistently. The platform’s own design ensured that emotionally provocative, high-engagement content, including disinformation, was prioritized on users’ timelines.
Towards a New Framework for Combating Disinformation
AI-generated content introduced a new challenge during the active conflict. Traditional media verification techniques, such as reverse image searches and metadata tracing, are no longer effective against freshly generated synthetic visuals. The digital fabrications we observed were designed to be believable: videos that mimicked leaders’ speech patterns, images styled as on-the-ground conflict photography, and voice-cloned statements from real political figures. The point of these fabrications was not simply to spread disinformation, but to destabilize any shared sense of what was actually happening.
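To make the limits of these traditional techniques concrete, the sketch below illustrates, under stated assumptions, the two checks described above: a metadata lookup and a perceptual-hash match standing in for reverse image search. The `KNOWN_FOOTAGE_HASHES` index and all names are hypothetical; a freshly generated synthetic image defeats both checks by carrying no camera metadata and matching nothing in any index.

```python
# Minimal sketch of two traditional verification checks and why they fail
# on freshly generated synthetic images. Assumes the Pillow and imagehash
# packages; the hash index of archived footage is hypothetical.
from PIL import Image
import imagehash

# Hypothetical index mapping perceptual hashes of archived clips to labels,
# a toy stand-in for a web-scale reverse image search index.
KNOWN_FOOTAGE_HASHES = {
    imagehash.hex_to_hash("d1d1b1a1c1e1f101"): "2023 naval drill clip",
}

def triage_image(path: str) -> str:
    img = Image.open(path)

    # Check 1: metadata tracing. Camera photos usually carry EXIF data;
    # AI-generated images usually carry none, though absence proves little.
    has_camera_metadata = bool(img.getexif())

    # Check 2: perceptual-hash lookup. Recycled footage (e.g., an old drill
    # clip) matches a known hash; a fresh synthetic image matches nothing.
    phash = imagehash.phash(img)
    for known_hash, label in KNOWN_FOOTAGE_HASHES.items():
        if phash - known_hash <= 5:  # small Hamming distance = near-duplicate
            return f"likely recycled footage: {label}"

    if not has_camera_metadata:
        return "no match, no metadata: unverifiable, possibly synthetic"
    return "no match in index: unverifiable"
```

Both checks work well against recycled footage like the naval drill clip described earlier, which is why they have dominated fact-checking workflows; neither says anything useful about an image generated minutes ago.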
Some of the most damaging content emerged from the deliberate fusion of generative AI, ideological intent, and institutional gaps. When a fabricated video showing Pakistani military leaders conceding defeat or Indian soldiers surrendering gained traction, it wasn’t just an online hoax; it became part of the psychological terrain of the conflict. In a nuclear-armed region, that is not a hypothetical risk.
In this context, newsrooms must stop treating platforms as neutral sources and build fact-checking into their editorial processes. Journalists need new protocols for digital verification, particularly during crises. Editorial teams should track the provenance of visual content, require independent confirmation before airing footage, and acknowledge on air when materials cannot be verified. Reliance on what is viral or trending as a proxy for truth is no longer just lazy; it is dangerous.
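As one illustration of what such a protocol could look like in practice, here is a minimal sketch of an editorial gate. Every field name and threshold is an assumption made for illustration, not CSOH's or any newsroom's actual workflow.

```python
# Minimal sketch of an editorial verification gate: footage airs only with
# traced provenance and independent confirmation, and is otherwise held or
# explicitly labeled unverified. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class FootageItem:
    source_url: str
    provenance_traced: bool = False      # earliest upload and uploader identified
    independent_confirmations: int = 0   # e.g., stringers, officials, agencies
    suspected_synthetic: bool = False    # flagged by a forensic review

def editorial_decision(item: FootageItem) -> str:
    if item.suspected_synthetic:
        return "hold: suspected synthetic media"
    if item.provenance_traced and item.independent_confirmations >= 2:
        return "air: verified"
    # The key protocol change: say on air what could not be verified,
    # rather than treating what is trending as a proxy for truth.
    return "air only with an on-screen 'unverified' label, or hold"
```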
Likewise, platform accountability cannot be optional. Verification should not be a subscription feature; it should be conditional on an account’s commitment to sharing authentic information on the platform. Synthetic media must be labeled in real time, using embedded metadata or watermarking systems. Platforms must publish conflict-specific transparency reports that detail not only takedowns but also what remained online and why. Algorithms that prioritize viral engagement over verified information should be reconsidered during conflicts, when the cost of misinformation and disinformation is not merely reputational but geopolitical.
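On the labeling point specifically, production systems rely on cryptographically signed provenance manifests (such as C2PA) or watermarks robust to re-encoding. As a toy illustration of the embedded-metadata idea only, the sketch below writes and reads a plain PNG text chunk using Pillow; unlike a real watermark, such a label disappears on any re-encode or screenshot.

```python
# Toy sketch of metadata-based labeling of synthetic media using a PNG text
# chunk (Pillow). Real systems use signed manifests (e.g., C2PA) or robust
# watermarks; this plain label is stripped by any re-encode or screenshot.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(in_path: str, out_path: str, generator: str) -> None:
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("synthetic-media", "true")
    meta.add_text("generator", generator)  # e.g., name of the model used
    img.save(out_path, pnginfo=meta)

def is_labeled_synthetic(path: str) -> bool:
    # A platform could run a check like this at upload time and surface
    # the label to users in real time, as proposed above.
    return Image.open(path).info.get("synthetic-media") == "true"
```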
The India-Pakistan conflict was not the first mis/disinformation-driven crisis, and it will not be the last. But it provided a case study in how military aggression and digital propaganda now operate in tandem. Disinformation is not adjacent to war but a part of it. Its role is to destabilize, provoke, polarize, and make rational policymaking harder. That is what we saw unfold, and that is what future conflicts will look like unless regulation, journalism, and platforms evolve in step with the threat.
The convergence of military escalation and disinformation has redefined what wartime communication looks like. Journalists, influencers, AI tools, and platform algorithms are now all part of the theater.
The cost of inaction is clear. We remain in a digital space where truth is brittle, where speed outpaces scrutiny, and where the amplification of misinformation and disinformation continues to bring two nuclear-armed states to the brink. The information war has changed, but our responses have not. Until they do, we will continue fighting our wars twice: once on the ground, and again in the cloud of manufactured reality.