Perspective

Synthetic Media in Elections and Politics in India

Prateek Waghre / Jan 14, 2026

Prateek Waghre is a fellow at Tech Policy Press.

Elise Racine & The Bigger Picture, Glitch Binary Abyss II, Licensed by CC-BY 4.0

During a session at the G20 Summit in Johannesburg in late November, Indian Prime Minister Narendra Modi called for restrictions on the use of AI in “deepfakes, crime and terror activities.” It was consistent with his warning in November 2023, when he reportedly asked OpenAI to flag deepfakes after he saw a purportedly synthetic video that featured him performing a traditional Indian dance, the Garba. Notably, the video itself was not synthetic, but featured an individual who has earned a reputation as the PM’s doppelgänger.

The Prime Minister’s stance, however, contrasted with his May 2024 response to a synthetically generated video of him performing a dance, which he described as a “delight,” and praised for its “creativity in peak poll season.” It appeared to be an instance of political one-upmanship after the Kolkata police reacted to a similar synthetic video of Mamata Banerjee, the Chief Minister of West Bengal, by threatening those who shared it on the grounds that the post was “offensive, malicious, and inciting.”

Mr. Modi’s selective positions on the perceived risks and harms of synthetically generated content are significant, but he is hardly the only one. Broader institutional responses from the government of India, specifically the Ministry of Electronics and Information Technology (MeitY), constitutional bodies such as the Election Commission of India (ECI), and political parties and actors across the spectrum have also fallen short. The downsides are amplified in a political environment marked by doublespeak, bad-faith actions, and ineffective interventions.

Over three days in October 2025, MeitY and ECI issued amendments for consultation and an advisory to political parties, respectively. As Amber Sinha pointed out, MeitY’s overall “hands-off” approach to AI governance is in contrast to its (attempted) “specificity” on the question of regulating synthetically generated media. The proposed amendments appear to mark a shift from its ill-conceived advisory in March 2024, which sought to require platforms to seek MeitY’s permission before publicly releasing AI tools deemed unreliable. That advisory followed comments on X by the then Minister of State for Electronics and Information Technology, claiming that a Google Gemini response to a question about whether Mr. Modi and other leaders, such as Donald Trump and Volodymyr Zelenskyy, were fascists violated provisions of the IT Rules, without specifying how.

Because of their scope and broad applicability, MeitY’s proposed amendments and their potential impact on the politics of information have drawn more attention than the ECI’s advisory. Commentary about the amendments has ranged from praise and suggestions for improvement to concerns about overly broad definitions that cover more than synthetically generated content, regulatory overreach, and legal viability, including whether MeitY can impose obligations on generative AI systems under the IT Rules framework, since they may not qualify as intermediaries. Other concerns include arbitrariness, such as mandating that a fixed percentage of the content be labelled or watermarked, questions about effectiveness, including whether labeling would curb circulation or render user declarations meaningless, and the risk of entrenching censorship through vague standards that encourage anticipatory compliance. Several of these concerns have also been echoed by industry groups.

ECI’s directions on synthetically generated media (so far)

A more complete picture of the effects such interventions may have on the politics of information in circulation also requires an understanding of the information that circulates in politics. That is where ECI’s advisory, any follow-up actions it may take, and the broader context of political discourse in the country are relevant.

The October 2025 advisory is the third such advisory issued by ECI. The first, issued in May 2024 during the course of the seven-phase 2024 General Elections held between April and June, stemmed from a petition in the Delhi High Court in which the bench asked the ECI to consider a representation by the petitioners on the risks posed by deepfakes in the electoral context. The advisory highlighted existing regulations that could apply to synthetically generated media, and specifically asked political parties not to allow their social media handles to publish or circulate videos that violate the existing framework or the Model Code of Conduct (MCC) for elections. It also called for such material to be removed within three hours of being notified, although it did not specify any particular mechanism, or define what constituted being notified, and for those responsible to be identified and warned.

The second advisory, issued in January 2025, called for labeling synthetically generated media used in campaigns by political parties. It pointed to the “increasing use of AI technologies in political campaigning,” and the need for transparency and accountability since such content has the “potential to influence voter opinion and trust.” While it referenced its May 2024 advisory, it did not reference any violations by political parties, which the Election Commission should have been aware of since they were in the public domain (examples are included in the section: Synthetically generated content and generative AI use for political messaging in India).

The October 2025 advisory, for its part, cited the “misuse of hyper-realistic synthetically generated information, including depicting political leaders…”. It modified its labeling requirements to specify, in language similar to MeitY’s proposed amendments released only two days earlier, that labels must cover “at least 10% of the visible display area.” It also required disclosure of the name of the entity that created or generated the content, either in its metadata or caption. The advisory further prohibited content that is unlawful, in broad terms, or that uses a person’s identity, voice, or likeness without their consent. It also directed political parties to keep records of all synthetically generated content for verification by the Election Commission.

Broader allegations of partisanship against India’s ECI

Many of these measures appear reasonable on a plain reading. Yet, it is necessary to consider the broader context, including questions about ECI’s perceived partisanship and how the use and prevalence of synthetically generated information in electoral politics have evolved since India’s 2024 general elections and through the end of 2025.

In recent months, ECI’s non-partisan credentials have been clouded by allegations that it has indirectly or directly favored the Bharatiya Janata Party (BJP) through hurried exercises to revise electoral rolls, known as special intensive reviews (SIRs), first in Bihar, and then across multiple poll-bound states. Critics say these exercises could disenfranchise millions of voters while raising questions about whether the Election Commission can, or is attempting to, determine citizenship. Opposition parties such as the Indian National Congress (INC) also allege that irregularities in voter rolls, colloquially referred to as ‘Vote Chori’ or vote theft, have favored the BJP. Others, such as investigative journalist Poonam Agarwal, have tracked repeated mismatches between the number of votes cast and those counted.

Other developments have also eroded trust. In December 2024, the Union Government, in consultation with ECI, amended rules to allow the commission to deny access to CCTV footage, days after the Punjab and Haryana High Court ordered it to release such material. In June 2025, the ECI then directed that such records be destroyed within 45 days of an election. Its combative response to INC’s ‘Vote Chori’ allegations has further strained credibility. At the same time, controversies and conspiracy theories persist about the security (or lack thereof) of electronic voting machines and their potential to be hacked or manipulated, even as debate over their reliability continues.

Synthetically generated content and generative AI use for political messaging in India

With scores of elections scheduled for 2024, the early part of the year was marked by concerns about elections, disinformation, and the possibility of a deepfakes-powered ‘October surprise’ in many elections around the world. While that scenario did not quite materialize, synthetically generated content became part of political discourse in the form of darkfakes (realistic and negative), glowfakes (realistic and positive), foefakes (unrealistic and negative), and fanfakes (unrealistic and positive), alongside other forms of low-quality or misleading content, sometimes referred to as “slop.”

In India, fact-checking organizations AltNews and BoomLive reported that synthetically generated media did not make up a significant share of the false claims they debunked during the 2024 general elections. However, generative AI was used to automate personalized outreach, including resurrecting deceased politicians for campaigns. The persuasive impact of such uses remains unclear, but they were visible in the recent assembly elections in Bihar, and are likely to continue to be used in future elections. Parody and fanfakes were common, too.

Others, including Rasmus Nielsen, warned that political actors themselves posed a major risk. Reflecting that concern, 15 organizations and over 200 individuals endorsed an open letter in February 2024, urging political parties in India to publicly commit to not creating deceptive or manipulative synthetic media. No party did so. Instead, there were instances of misuse, including the amplification by one of their representatives of a synthetic video of a Hindi film actor praising the INC, and attempts by the head of BJP’s IT cell to dismiss genuine videos as synthetic.

Synthetic media wasn’t the only source of concern. The BJP also used conventional animation to produce blatantly Islamophobic campaign videos on Instagram and X. There is no evidence that ECI took any action to prevent the circulation of these videos or penalize the BJP. Such action would not have been unprecedented, as ECI had directed X to take down posts in April 2024, weeks before BJP’s animated videos appeared and reportedly wrote to the presidents of both the BJP and INC about violations of the MCC along gendered lines. However, as the Internet Freedom Foundation noted in a letter to the commission in June 2024, the ECI remained silent on the repeated targeting of religious minorities.

The ECI’s silence was also evident in November 2024, when, on the eve of the final phase of polling for assembly elections in the state of Maharashtra, BJP’s official account released what it claimed were leaked audio conversations implicating multiple people, including a member of a rival party, in a scam (10:58 PM, 11:00 PM, 11:02 PM, 11:04 PM). At least three of these clips appear to be AI-generated, based on analyses by BoomLive and Logically Facts. To date, there is no public evidence that the Election Commission took action, and none of its subsequent advisories referred to the incident directly. Notably, the posts are still up on the party’s official handle despite ECI guidance to remove synthetic content within three hours of being informed about its existence. Such episodes raise questions about the commission’s willingness and ability to act in a non-partisan manner.

Even as question marks remain about the impact of synthetic media on the 2024 election, and its persuasive effects in India, the rapid improvement of generation tools in 2025 underscores the need for vigilance. In a 2024 year-end report, AltNews referenced an increase in synthetic media after the general election (although it did not provide any numbers at the time). By late 2024, image generation tools were being used to create images targeting religious minorities, particularly Muslims.

Unfortunately, this trend continued into 2025, with official BJP accounts repeatedly posting material demonizing Muslims and leaders from the Congress, especially through its Assam and Delhi units. Its West Bengal unit also appeared to use some AI-generated animations. Though a lesser concern than the problematic messaging itself, few, if any, of these posts adhered to ECI’s labelling guidelines. In other instances, individual party leaders and ministers referenced synthetic media.

Other political parties have not deployed synthetically generated media in similarly divisive ways, but they have targeted the BJP and the Prime Minister, including content criticizing his ties with the Adani Group, depicting him as being criticized by his late mother, or spreading false claims about his supposedly redesigned residence, among other examples.

A sobering prognosis

Together, these episodes tell a sobering story: crass, negative, and divisive messaging is a significant part of the vocabulary of political discourse. This is without accounting for the role of diffuse and surrogate actors who shape narratives and set agendas in Indian politics, further muddying the waters. The actions and revealed preferences of political parties suggest that the relatively low prevalence of synthetic media in 2024 reflected technological and, to some extent, capacity limitations, rather than restraint, pointing to a clear intent gap among political actors. With four states and one union territory scheduled to hold elections in early 2026, this is particularly worrisome.

As models continue to improve, the prevalence of synthetically generated media will increase, making it increasingly difficult for most people to distinguish real from synthetic media and leading to greater desensitization. Combined with regulatory attempts that are likely to be ineffective at best and selective and stifling at worst, constitutional bodies that appear to lack both the capacity and the will, and political actors who posture publicly while cynically exploiting these tools to amplify already divisive rhetoric, the prospects for meaningful improvement are slim. Neither technology-based detection nor regulation will address what political actors and dysfunctional institutions seek to exploit.

Authors

Prateek Waghre
Prateek Waghre is a technologist-turned-public policy researcher in India. Most recently, Prateek was the Executive Director / Policy Director at the Internet Freedom Foundation (IFF), a digital rights organization in India. Prior to IFF, he was a technology policy researcher at The Takshashila Inst...
