Dispatches on Tech and Democracy: India’s 2024 Elections #6
Amber Sinha / May 24, 2024
Amber Sinha is a Tech Policy Press fellow.
This is the sixth issue of Dispatches on Tech and Democracy, examining the use of artificial intelligence (AI) and other technologies in the Indian general elections. For the last seven weeks, these dispatches have covered news and analyses about the ongoing elections in India. All but one phase of the elections have now concluded. This issue will take an in-depth look at the use of WhatsApp in electioneering in India.
In Focus: Use of WhatsApp in electioneering in India
In the last dispatch, I mentioned a new report by Rest of World that analyzed the BJP’s use of WhatsApp groups in electioneering. That report, based on a study in partnership with the Pulitzer Center’s AI Accountability Network and Princeton University’s Digital Witness Lab, analyzed activity across BJP-affiliated WhatsApp groups in Mandi, a small town in Northern India, to understand the app’s role in the BJP’s 2024 election campaign. It notes that at least 5 million WhatsApp groups are operated by the BJP in India, enabling the party to disseminate information across the country through this tightly knit network “within 12 minutes.”
While attention in the West has focused primarily on Facebook and Twitter as disseminators of misinformation, in India (much as in Brazil) messaging services such as WhatsApp play a dominant role. Out of India’s roughly 820 million internet users, over 535 million are monthly active users on WhatsApp, and WhatsApp’s monthly active user count surpassed Facebook’s as far back as September 2018.
In particular, WhatsApp forwards of image and video files are one of the key modes of disseminating information and news in India. As a result, a large amount of disinformation comes in the form of misleading images and videos shared via the application, often with a text blurb. Visceral videos and images appeal to people's raw emotions more readily than text messages do. They are also easily remixed with false contexts and messaging that may have little to do with the actual image or video.
Related Reading:
- Dispatches on Tech and Democracy: India’s 2024 Elections #3
- Dispatches on Tech and Democracy: India’s 2024 Elections #4
- Dispatches on Tech and Democracy: India’s 2024 Elections #5
Misinformation and extreme speech spread on both social media and WhatsApp, but the user experience is markedly different on the two kinds of platforms. On a social media platform like Facebook, the content users see is mediated by its algorithms, but WhatsApp offers an altogether different experience, simply listing conversations in reverse chronological order, with the most recent first. User actions and responses on Facebook are also mediated by the design of the platform, as users are expected to leave a comment on a post, share it, or react using emojis. WhatsApp does not shape interactions in the same way: in a WhatsApp group, it is up to users how to engage.
Norms and practices on WhatsApp have also evolved without any personalized algorithmic curation. Each group has its own shared identity and purpose, and violating those is often met with some backlash. There can also be explicit rules about what to share and not to share in groups, with group admins playing an active role in weeding out those who breach these rules and members policing content that they feel does not belong. This aspect of WhatsApp groups may be reminiscent of the tightly monitored forum discussions of the first decade of the Web.
In a group with extended family members, “Good morning” messages, jokes, or entertaining forwards may be kosher, but messages against the group's political persuasion could be unwelcome. Messages and forwards sent to one group do not necessarily find easy mobility to other groups, and people are acutely conscious of which messages belong where.
It is also not easy to post the same message or forward it to all your groups or contacts, as you need to send it to each group or person. Therefore, while messages become viral on WhatsApp, managing or monitoring virality is more challenging. However, this aspect of WhatsApp makes it more useful for mobilizing small geographic groups or communities. For example, it is all too common to see WhatsApp messages used to spread rumors that prey on community prejudices and turn those local sentiments into violence.
Groups might also have their own content restrictions, but as late as 2018, WhatsApp offered no way to report abuse or flag misinformation. In September 2018, the company finally appointed a single grievance officer for India whom users could contact with concerns and complaints. The grievance officer cannot be contacted via WhatsApp itself, and a digital signature is required to reach them over email.
In countries such as India, WhatsApp messages are among the biggest propagators of misinformation and the hardest to track. This is mainly because communication is end-to-end encrypted and takes place at a more private level, making it very difficult to trace a piece of information back to any particular individual. As a result, there is much less visibility into the spread of information on WhatsApp than on Facebook or other platforms with more public groups and feeds, as users can only see what is happening in their own groups.
Political campaigns can overcome these feature limitations by spreading content through a large and intricate network of people involved in electioneering. This is where the BJP has an edge in India. The Rest of World story details the extent of the BJP’s WhatsApp network in Mandi to give an idea of how rigorously the party has organized its digital presence across the country:
In Mandi, the BJP has a WhatsApp group for everyone. There is a hierarchy of groups organized from the national level down to state, district, sub-district, and so on — all the way to individual "booths," which represent the community of people who vote at the same polling booth. Then there are groups targeted to different demographics and interests: In Mandi, farmers can join at least two farming-focused WhatsApp groups. There are also groups for youth, doctors, ex-servicemen, traders, and intellectuals. Women have the option to join the group "Mahila Morcha," Hindi for "women’s wing." There are groups for official caste classifications and tribe classifications. Some groups are intended only for BJP workers or members; others are open to the general public. There are also BJP-linked groups that aren’t explicitly political. One such group is dedicated to keeping Mandi clean and tidy — but a BJP member is still an admin.
Even though WhatsApp was intended as a private messaging service, it is difficult to think about it today as anything other than a hotbed of group conversations. Groups on WhatsApp are all built around common interests or associations. These could be personal (extended family, friends, weddings, or holiday planning), work-related (company-wide, department, and project-related), about hobbies (cricket, cinema or quizzing), or other communities (housing complex, alumni groups, new parents). Each group is held together by its shared identity.
In an earlier report about fake news in India commissioned by the BBC, this shared identity is identified as the key driver that makes WhatsApp groups behave like a collective. It fosters homophily, the drawing together of people into tight networks of like-mindedness. Shared identity, association, and beliefs also leave group members prone to confirmation bias, the well-recognized tendency to seek out or interpret information in ways consistent with one’s existing beliefs. This makes WhatsApp an ideal medium for mobilizing members of a group.
More often than not, a WhatsApp group brings together people with similar beliefs, which compounds this confirmation bias. The fact that information arrives from someone one knows also encourages acceptance without questioning.
Other Developments
There has been much talk about the use of deepfakes in the ongoing Indian elections without sufficient in-depth investigation or research. Nilesh Christopher and Varsha Bansal attempt to plug this gap in an excellent long-form story for Wired, which sheds light on the actors, practices, and trends in India's burgeoning deepfake industry. The article looks at some prominent companies creating synthetic video content for the elections and how they work with key political parties such as the BJP and Congress. Aside from deepfake content created with the intent of going viral online, the story also documents other uses of synthetic voice and video, such as AI-based voice calls for logistical coordination and campaigning, and the integration of AI-based services into the BJP’s Saral app, which is used for intra-party coordination.
In the last dispatch, I mentioned the Boom Live story on the inefficacy of X’s crowdsourced fact-checking program, Community Notes, in India. Last week, The Hindu also published a story about the failure of the Community Notes feature to effectively flag misleading content posted on X by the ruling BJP. The report notes that the feature appears to have recently stopped displaying fact-checking Notes on controversial BJP content. This does not mean people aren't drafting Community Notes on this content (Notes that completely disagree with specific posts are being submitted), but these Notes are not being approved or displayed to X users. The report quotes former executives at X stating that "X rolling out the Community Notes feature in India just weeks before the election without any human moderators was expected to have major flaws and could be perceived as an action to simply boost their brand."
In a recent report, India Civil Watch International (ICWI) and Ekō document Meta’s approval of AI-manipulated political advertisements during India’s election that spread disinformation and incited religious violence. To evaluate Meta’s mechanisms for detecting and blocking political content that could prove inflammatory or harmful during India’s ongoing elections, the researchers created several such political advertisements and submitted them to Meta’s ad platform. The report says the adverts were based on real examples of hate speech and disinformation prevalent in India. In all, they submitted 22 adverts in English, Hindi, Bengali, Gujarati, and Kannada, of which 14 were approved by Meta’s review mechanisms.
A new investigative report by Check My Ads documents Google’s continued monetization of the Hindu nationalist media site OpIndia despite its repeated violations of Google’s policies on incitement to hatred and disinformation. Wired also covered the report in an in-depth story documenting OpIndia’s history of publishing conspiracy theories, hate speech, and disinformation, and its presence on Google and other platforms such as Facebook and X.
Additional Reads
- Article 19, the Center for Democracy and Technology, and eight other organizations published a letter expressing concerns about recent actions taken by India’s central government against journalists, political opposition, and media outlets.
- A recent story on The Quint discusses the differences between political content on WhatsApp and YouTube.
Please share your feedback. Are there stories or important developments that we have missed? You can write to us at contributions@techpolicy.press with your suggestions, reflections, or critiques.