
The Era of AI-Generated Election Campaigning is Underway in India

Vandinika Shukla / May 24, 2024

Vandinika Shukla is a fellow at Tech Policy Press.

Chennai, India - April 9, 2024 - BJP supporters at a roadshow for Prime Minister Narendra Modi canvassing votes for the Chennai Lok Sabha BJP candidates. Srinivasa Krishnan/Shutterstock

With a six-week-long election cycle, nearly 1 billion voters, and more than 2,600 political parties, India’s general elections are under the world’s gaze. While Prime Minister Narendra Modi is the showstopper of the world’s largest democratic exercise, a 31-year-old college dropout and former student politician in the back lanes of a small town called Pushkar in Rajasthan is grabbing headlines in The Washington Post, The New York Times, the BBC, and Rest of World.

“During the COVID-19 lockdown I told myself I would learn a new skill every 30 days, and that’s when I taught myself how to code,” Mr. Divyendra Singh Jadoun told me as he recalled his journey from starting a viral Instagram account, “The Indian Deepfaker,” to running his 6-month-old, 9-person AI company, Polymath Solutions. “Earlier it used to take me seven days with facial data and computational power to create AI-generated content. Now I just need a single image and a target image to make a face swap video in less than 3 minutes, without any coding skills, for less than $5.” Mr. Jadoun’s success is emblematic of an emerging, bottom-up entrepreneurial ecosystem of AI-generated text, video, and audio developers in India.

Supply meets demand

That ecosystem is currently benefiting from the election cycle. Political parties in India are estimated to spend over $50 million on AI-generated election campaign material this year, and young synthetic media companies like Polymath Solutions are meeting that demand. The dynamics of the emerging market for such material in India confirm that deepfakes and other forms of synthetic media are no longer just a weapon for flooding the information space with falsehoods, but rather mark a new era of political campaigning that utilizes an already established communications infrastructure for targeted, relational, emotionally charged voter outreach via social media, messaging apps, and other digital channels.

Across the country, there are numerous examples of AI-generated campaign videos, personalized audio messages in different Indian languages, and automated calls to voters in a candidate’s voice. In the most high-profile case so far, Home Minister Amit Shah alleged that Revanth Reddy, the Congress party’s recently elected chief minister of Telangana, used AI to alter a video to misrepresent Shah’s views on affirmative action quotas. This was followed by two arrests in connection with the doctored video, one member each from the opposition Aam Aadmi Party and the Congress party.

Prime Minister Modi himself has been an early adopter: the government-created AI tool Bhashini, an AI-driven language translation system, translated his voice from Hindi to Tamil in real time. Recently, Shashi Tharoor, a member of parliament from the Congress party, conducted an interview with his own AI avatar.

In January, M. Karunanidhi, the former Chief Minister of the southern state of Tamil Nadu, appeared in an AI-generated video at his party’s youth wing conference and then at the launch of a friend and fellow politician’s memoir. He wore his signature look – yellow scarf, white shirt, dark glasses – and his familiar stance, head slightly bent. But Karunanidhi, who led the state’s government for nearly two decades, died in 2018. In February, the official X handle of the All India Anna Dravida Munnetra Kazhagam (AIADMK) party posted a minute-long AI-generated audio clip of J. Jayalalithaa, the iconic superstar of Tamil politics colloquially called “Amma,” or “Mother,” seeking support for her successor contesting the elections. Jayalalithaa died in 2016.

“The film industry contacted me to create content for the book launch. It was supposed to be a private event, but it quickly became viral and political,” Senthil Nayagam, who made the deepfake Karunanidhi video, told me. Mr. Nayagam is the founder of Muonium AI – another young synthetic media company that got its start with experiments in AI-generated content, including face-swaps and trending videos adjacent to the film industry.

“We experimented with what fans would like and created a version of the famous actor Rajnikanth’s song in the voice of the famous singer S. P. Balasubrahmanyam,” explained Nayagam. Balasubrahmanyam had been Rajnikanth’s playback singer for years, but he died during the COVID-19 pandemic. “We quickly began to get requests from individuals to resurrect family members they were mourning. Resurrecting people with AI is possible and believable.” Nayagam explained that his work in the South Indian film industry offered a model to political parties. While this particular use of AI for Balasubrahmanyam’s voice is under legal dispute, the video was all but certain to go viral given the celebrity status Rajnikanth enjoys in the South.

Meanwhile, voters are now receiving calls from supposed local representatives about the most pressing issues in their area – except the leader on the other end of the phone never made the call. “You can get a call from anyone you want. These are two-way conversational agents, not just IVR [Interactive Voice Response], which makes them a hundred times more dangerous than deepfakes. But if we are building this, anyone can,” Mr. Jadoun warned me as he showed me a sample AI phone call to a voter.

A sample provided by Polymath Solutions of AI avatars addressing Indian voters by name, in the language they speak, and asking what challenges their constituency is experiencing.

Two things are unique about the high volume of hyper-realistic AI-generated content in India. Firstly, the content is designed to appeal to emotions, is largely translated into regional languages, and tends to tug at voters’ relational bonds with their leaders, especially with resurrected politicians who enjoyed superstardom while alive and veneration upon passing. Secondly, this content is distributed on unmoderated and unscrutinized platforms, often packaged by hyper-local content aggregators. For example, Moj, a short-video app launched in 2020, hosts over 300 million videos created in regional languages with over 50 billion views, and recently saw a 71% spike in viewership in Tamil, followed by Telugu and Bhojpuri. Public App, a location-based social network valued at over $250 million as of 2021, is available in several major Indian languages (including Hindi, Bengali, Punjabi, Telugu, Tamil, Kannada, Malayalam, Odia, Assamese, Gujarati, and Marathi). It allows shop owners and other local businesses to drive e-commerce and hire local talent, but it also lets political leaders, government authorities, and media houses reach local audiences. Meanwhile, the BJP’s structured and deeply embedded volunteer workforce has powered an unparalleled WhatsApp group infrastructure; its sophisticated messaging distribution dominates the political narrative while remaining largely out of plain sight.

Sample video provided by Muonium AI with a demo of Prime Minister Modi’s famous monthly radio show “Mann Ki Baat” (loosely, “From the Heart”), translated into regional languages.

Emerging technology, emerging rules and regulations

The race to get content to voters faster and with more relevance, coupled with a permissive regulatory environment, has meant that ethics is now in the hands of the small enterprises meeting the demands of electioneering in 2024. Mr. Jadoun and Mr. Nayagam joined another AI startup to create what they call an “Ethical A.I. coalition manifesto,” pledging to protect data privacy, uphold election integrity, and prevent the creation or distribution of harmful content. Other startups may not have the same proclivity for labeled or ethically produced content.

Prompted by the doctored video of the Home Minister, the Election Commission of India issued a directive in early May to all political parties regarding the responsible and ethical use of social media, specifically cautioning against the spread of misinformation and deepfakes. The directive goes on to state that parties must remove any deepfake audio or video content within three hours of receiving notice, although what constitutes “notice” is still unclear.

Ahead of the elections, the Indian government also tried its hand at AI regulation with an AI advisory in March 2024, asking platforms to seek the “explicit permission” of the Ministry of Electronics and Information Technology (MeitY) before deploying any “unreliable Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s)” for “users on the Indian Internet.” It also asked platforms to ensure their systems are free from bias or discrimination, that they do not threaten the integrity of the electoral process, and that all synthetically created media and text is labeled with unique identifiers or metadata. But the advisory is under scrutiny over its legal validity (as an advisory, it has no statutory force), its lack of clarity in scope and effect, and its potential to stifle startup culture.

Fact-checking units are trying to fill the void. “At Vishvas News, we focus on the habitual offenders among the bad actors and maintain data that helps us track misinformation trends and patterns, as an early warning system,” Jatin Gandhi, the Executive Editor of Jagran New Media and head of fact-checking at Vishvas News, told me. Gandhi is also a member of the newly formed Misinformation Combat Alliance (MCA), which includes 12 fact-checking organizations – all International Fact-Checking Network (IFCN) verified signatories – three media companies, and a civic tech organization. More recently, the MCA was supported by the Google News Initiative, bringing together fact-checkers and legacy publishers to counter election misinformation and amplify debunks. The Deepfakes Analysis Unit, an initiative of the MCA seed-funded by Meta, launched a WhatsApp helpline for multilingual, real-time conversations.

But as Gandhi sees it, “fact-checking is like critical medical care in a health emergency,” and media literacy will be the pathway to a healthier information diet in India. For elections, it continues to be a game of whack-a-mole, especially when content has already circulated widely before it is debunked. Moreover, despite AI companies like The Indian Deepfaker insisting on “AI-generated” disclaimers, the watermarks don’t have their intended effect because, firstly, a common understanding of content credentials is yet to reach the masses, and secondly, inoculation does little to prevent the swift distribution of entertaining, emotionally charged content received from people you trust.

An election as big as India’s operates as its own economy. The Center for Media Studies, an Indian non-profit tracking election expenditure, expects spending to reach $16 billion this year. AI startups are riding this wave, but those I spoke to say they do not consider political communications to be their primary source of income in the long run. So, after this vast but largely uncontested election is over, who will be their primary clients? The use cases and distribution mechanisms for AI-generated content modeled during the Indian elections are a signal of what’s to come. When all eyes are not on the elections, and the ruling party is steering the country with a new arsenal of tools, the same relational, emotionally charged content can be deployed for more than entertainment or winning votes. This new style of political campaigning will bleed into how Indians consume information about their day-to-day relationship with government representatives – from public services, disaster relief, and healthcare to food subsidies, government policies, and civic action.

The Indian elections are a signal for how trusted relationships will be forged and destroyed in an era of hyper-realistic, hyper-personalized content, customized in regional languages, and distributed en masse.

This article has been updated to clarify that the Deepfakes Analysis Unit operates independently as an initiative of the Misinformation Combat Alliance (MCA).
