India’s Experiments With AI in the 2024 Elections: The Good, The Bad & The In-between

Nishtha Gupta, Netheena Mathews / Sep 25, 2024

In this “super year for elections” worldwide, the potential impact of generative AI on democracy has been a major concern. The 2024 Indian general elections served as a testing ground for campaign uses of AI technologies. Indian political parties were reported to have spent an estimated $50 million on AI-generated content ahead of the polls, including deepfakes of deceased political figures and manipulated media portraying celebrities in politically charged scenarios. While the infusion of AI into electoral campaigning did not bring the catastrophic outcomes some feared, it amplified existing trends in traditional campaigning, enhancing the creation of persuasive and satirical narratives and, at times, the spread of deception.

All of this experimentation took place against the backdrop of a pivotal election. Prime Minister Narendra Modi sought to consolidate his power in a third consecutive term, and his Bharatiya Janata Party (BJP) employed divisive tactics and inflammatory rhetoric to do so. In an information ecosystem already rife with misinformation, experts and even Prime Minister Modi himself expressed concerns about the misuse of generative AI to influence voter perceptions.

Ultimately, both the BJP and other political parties used AI-generated content to attempt to sway voters. Not all these uses were concerning. In a recent report from the Centre for the Study of Democratic Institutions (CSDI) at the University of British Columbia, our team identified how generative AI technologies can both bolster democratic processes and pose significant risks to them. These dual aspects were apparent in the Indian election.

Beneficial Uses of Generative AI

Generative AI was employed to inform citizens by improving their access to high-quality information, making it more accessible to a broader audience. One notable example was the use of the AI-powered translation tool Bhashini, which allowed Prime Minister Modi to deliver speeches in multiple Indian languages, reaching a wider audience across non-Hindi-speaking regions.

The technology was also used to improve representation by facilitating more personalized and targeted political messaging during the 2024 elections. AI-powered robocalls were used in efforts to deliver hyper-personalized messages to voters. AI-driven chatbots and voice assistants improved communication between candidates and voters. BJP campaign volunteers in Rajasthan received personalized WhatsApp videos from a party leader who addressed each volunteer by name using voice-cloning and lip-matching software to deliver the party’s message. In yet another innovative use of AI, Congress politician Shashi Tharoor sat in an interview with his AI avatar at a literature festival in Kerala, uniquely engaging voters.

Harmful Uses of Generative AI

However, the darker side of AI use emerged as well. Generative AI systems were employed to empower deception by spreading misinformation and amplifying divisive narratives. A notable instance involved manipulated videos featuring Bollywood actors Ranveer Singh and Aamir Khan criticizing Prime Minister Modi while endorsing the Congress party. These videos aimed to leverage the influence of celebrities to damage the reputation of political rivals. Elsewhere, an AI-generated clip falsely depicting Congress leader Rahul Gandhi as being sworn in as India’s prime minister circulated on social media, even though the election was still ongoing. The video, which used AI voice cloning and misleading visuals, was designed to deceive voters by creating the illusion that Gandhi had already won.

Generative AI can contribute significantly to polluting the information environment by enabling the rapid creation and dissemination of low-quality or harmful content. For instance, OpenAI-powered chatbots like Strategy Boffins India Poll Predictor 2024 and Electoral Bonds India were found to provide incorrect information. Beyond India, a study by European nonprofits found that Microsoft’s AI chatbot Copilot returned wrong answers one out of three times when responding to election-related queries. During the 2024 Indian elections, Meta’s automated ad review systems also approved political ads that incited violence against Muslims, exacerbating the spread of divisive and inflammatory content. This not only made it more difficult for voters to access reliable information but also escalated tensions by flooding the digital space with incendiary material designed to manipulate public sentiment.

Another concerning use of generative AI in the 2024 elections was its potential to enable targeted harassment of political figures and candidates. Deepfake intimate content posed a significant concern, especially since women in politics have been targeted around the world. In India, this insidious tactic has also been weaponized against women journalists, amplifying fears for the 2024 elections. During the campaign, an AI expert revealed that he received numerous requests from politicians to create unethical deepfakes. These included fabricating audio clips of opponents making gaffes, superimposing faces onto explicit images, and even producing fake videos of their own candidates to discredit any real damaging footage that might emerge. Although he declined these requests, the prevalence of such demands highlights the potential for AI-driven harassment.

Beyond the Binary

However, not all uses of generative AI in the 2024 Indian elections fit neatly into the beneficial or harmful categories. One use case is the digital resurrection of deceased leaders, such as Tamil Nadu’s longtime rivals Jayalalithaa and M. Karunanidhi, in deepfake videos that endorsed current candidates and invoked nostalgia and loyalty among voters. Similarly, the Communist Party of India-Marxist (CPI-M) used AI to help ailing party veteran Buddhadeb Bhattacharya reach out to voters. These uses raise ethical concerns, as they exploit the likenesses of elderly or deceased individuals without their consent and can manipulate the emotional responses of the electorate.

Political parties and political meme pages turned to AI for trolling and satire, rather than outright misinformation or deception, further blurring the lines of acceptable use in electioneering. For instance, the Congress produced a satirical video where they superimposed Prime Minister Modi’s face onto a singer in an existing music video, labeling him as a “chor” (thief) to emphasize accusations of corruption.

In another example, an AI-generated video of Prime Minister Modi dancing to Lil Yachty’s hit song “Poland” went viral. Prime Minister Modi even humorously responded to the video, tweeting, “I can’t dance well in real life, but thanks to AI, I seem to be getting better!” While these manipulations were recognizable as satire, they highlighted parties’ ongoing experimentation with AI and the increasingly blurry boundaries surrounding its use on social media.

Amplifier, Not Revolutionizer

Despite concerns about AI’s potential to disrupt the electoral process in India, the reality of the 2024 elections demonstrated that most misinformation and hate speech relied on existing, less advanced technologies. Cheapfakes—simple media edits, mislabeling, and out-of-context footage—were the primary tools used to spread false information. Google’s Project Shakti, which monitored and fact-checked election-related content, found that only 2% of the stories they reviewed were related to deepfakes or AI-generated content.

Additionally, a study in Uttar Pradesh, covering 500 mobile internet users and analyzing 1,858 viral WhatsApp messages, revealed that just 1% contained AI-generated content. This suggests that the danger of generative AI in spreading misinformation is currently low in India, largely because most misinformation is disseminated in local languages in which large language models lack proficiency. In the recent elections, a significant portion of AI-generated content was focused more on satire than on misinformation, using the technology for comic exaggeration rather than deception.

Just as the 2014 elections were dubbed the “Twitter election” and the 2019 elections the “WhatsApp election,” the 2024 elections saw AI technologies integrated into the political landscape. However, AI mainly enhanced the reach and efficiency of established communication strategies, rather than creating entirely new methods of engagement. These observations align with the CSDI report’s analysis that the near-term implications of generative AI for democracy are likely to be relatively modest exacerbations of other longstanding threats. However, this doesn’t mean that policymakers and stakeholders should be complacent about the threat of generative AI.

Insufficient Regulatory & Accountability Mechanisms

The enforcement of content moderation regulations by social media platforms in India leaves much to be desired, with companies that own these platforms accused of profiting from the proliferation of hate speech. Research by The London Story and India Civil Watch International revealed that Meta allowed political ads with Islamophobic hate speech, Hindu supremacist narratives, and calls for violence. Similarly, YouTube failed a content regulation test on ads containing misinformation and inflammatory material.

Though the Information Technology Act, 2000, generally governs online platforms, the Election Commission of India (ECI) specifically oversees communications during elections. In the recent elections, however, the ECI struggled to regulate social media and messaging platforms. This is partly because the code of ethics imposed on platforms is voluntary. Even though the ECI issued a statement to political parties warning them against the use of AI to create deepfakes, the election management body largely depended on tech companies and platforms to self-regulate. The lack of effective regulations also means that violations often do not result in serious consequences. For instance, while both Ranveer Singh and Aamir Khan filed cases over the deepfake videos depicting them, there has been no official update on their complaints.

While India did not initially plan to legislate AI regulation, this stance changed following a controversy in early 2024, when Google's AI-powered Gemini chatbot, in response to a user query, suggested that Prime Minister Modi's policies had been characterized as "fascist." In response to the incident, the government issued a directive in March requiring tech companies to seek permission before launching "unreliable" or "under-tested" AI models. However, this directive was retracted two weeks later over concerns about stifling innovation. Currently, India has no laws directly governing AI, but the surge in deepfakes has increased the demand for regulation.

In July 2024, media reports suggested that the Ministry of Electronics and IT is drafting a new AI-focused law that would require social media platforms to label AI-generated content. According to one report, however, the law would avoid penal consequences for violations in order to encourage innovation. Moreover, the ministry is exploring frameworks to ensure AI systems are trained on Indian languages and context-specific content, especially following the Gemini incident.

Key Takeaways & Recommendations

Effective accountability measures can deter actors from pursuing harmful uses of generative AI, incentivize institutions to take steps to minimize risks, and signal to the public that norms for safeguarding democratic elections are being upheld. Political parties and consultants are already using advanced machine learning to analyze voter data for targeted online political advertising. Currently, these data-driven tactics are more widespread and effective than the relatively new and limited use of AI-generated media designed to mislead voters.

While generative AI has not yet fundamentally increased the risks of voter deception or targeted harassment of politicians, its growing accessibility and sophistication will necessitate stricter regulation. Looking forward, it is imperative to establish robust and non-partisan regulatory frameworks to govern the use of AI in elections. This includes labeling of AI-generated campaign content posted by political parties, candidates, and other political organizations to ensure transparency and accountability. Moreover, tech companies must be held accountable for their role in amplifying election-related disinformation and harassment. Collaborating with independent fact-checkers and regulatory bodies, these platforms need to enforce measures to reduce the spread of false information and the misuse of AI technologies. Additionally, enhancing digital literacy among voters can help equip them to critically evaluate the content they encounter.

India, with its vast electorate and diverse political landscape, could serve as a critical testing ground for experimenting with and refining appropriate AI policies. By leading the way with comprehensive regulations and ethical AI use, India has the potential to set a global precedent, offering lessons for other democracies grappling with the same challenges.

Authors

Nishtha Gupta
Nishtha Gupta is a research assistant at the Centre for the Study of Democratic Institutions (CSDI) and a Master of Public Policy and Global Affairs student at the University of British Columbia. She has a background in journalism at one of India’s top news organizations.
Netheena Mathews
Netheena Mathews is a graduate research assistant at the Centre for the Study of Democratic Institutions (CSDI) and a Master of Public Policy and Global Affairs student at the University of British Columbia. She has a decade’s experience in public policy analysis, impact consulting, and journalism.
