AI for Activism

Daniel Calingaert, Vukasin Petrovic / Oct 14, 2024

A protest against environmental degradation. Shutterstock

Artificial intelligence poses significant threats to democracy, such as turbo-charged disinformation and surveillance. These threats merit serious responses, but too little attention is paid to AI's positive contributions to citizen participation in politics. AI is here to stay, and it offers activists significant opportunities to advance democracy.

Some civil society groups are already turning to AI to forecast political developments, fact-check media content, get messages out more widely on social media, organize grassroots campaigns, circumvent censorship, and protect against cyber-attacks. Pro-democracy advocates should learn from these pioneers and better integrate AI into their strategies. As authoritarian rulers grow increasingly sophisticated in using AI for nefarious ends, pro-democracy activists have to up their game.

Get Smarter

Machine learning models predict election outcomes, closure of civic spaces, foreign influence in democratic processes, and changes in social structures and relationships. Civil society leaders can use these forecasts to anticipate pivotal challenges and opportunities and plan accordingly.

AI tools track and analyze media coverage. They identify trends in social media, detect flows of misinformation, and assess how media narratives impact public opinions and emotions. Debunk.eu has used such tools to detect fake news across Europe. Vox Ukraine follows how Russian propaganda spreads and influences views about the war in Ukraine.

In the United States, the FactStream app checks facts during political debates in real time, pulling from a database of previously fact-checked claims so that users can instantly verify statements made during live broadcasts.
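FactStream's internals are not described here, but the core idea of real-time checking against a database of already-verified claims can be illustrated with a simple sketch: fuzzy-match a newly heard claim against stored claim texts and return the stored verdict when the match is close enough. The function, claims, and threshold below are hypothetical, not FactStream's actual method.

```python
from difflib import SequenceMatcher

def match_claim(claim, checked_claims, threshold=0.6):
    """Find the closest previously fact-checked claim and return its verdict.

    checked_claims maps claim text -> verdict ("true", "false", etc.).
    Returns (matched_claim, verdict), or None if nothing is similar enough.
    """
    def similarity(known):
        return SequenceMatcher(None, claim.lower(), known.lower()).ratio()

    best = max(checked_claims, key=similarity)
    if similarity(best) >= threshold:
        return best, checked_claims[best]
    return None
```

A claim phrased slightly differently from the stored version ("4%" versus "4 percent", say) still matches, while an unrelated statement returns no result. Production systems use semantic embeddings rather than character-level similarity, but the lookup structure is the same.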

Mobilize Support

Civil society activists turn to AI to broaden their reach on social media. Machine learning models predict the views of different audiences, identify key influencers, track referrals, and compare performance across social media accounts. Activists can use AI to tailor their advocacy to different audience segments, understand better how public content spreads on social media, analyze their competitors’ strategies, and assess the reactions of their users.

Close to half of all internet traffic came from bots in 2022, and this share is expected to grow substantially with the proliferation of AI. Several social movements have used bots to drive conversations in social media and gain coverage from mainstream media. Bots appear to encourage action among supporters but have far less impact in changing opinions.

Activists can use AI as commercial marketers do to expand their reach on social media, see what sentiments and emojis their posts elicit, and measure how effective their opponents are. They can test in real time which messages resonate and which need improvement.
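Testing which messages resonate ultimately reduces to comparing engagement rates between message variants. A minimal sketch of that comparison, using a standard two-proportion z-test (the function name and the numbers in the usage note are illustrative, not drawn from any specific platform):

```python
from math import sqrt, erf

def ab_significance(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-test: does variant B's engagement rate differ from A's?

    Returns (z, p_value); a small p_value means the difference in
    engagement is unlikely to be random noise.
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)           # rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))                     # two-tailed
    return z, p_value
```

For example, 90 clicks versus 50 clicks on 1,000 impressions each is a statistically significant difference, whereas 52 versus 50 is not; this is what lets activists decide in real time which message to keep and which to drop.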

Advocacy groups, including Human Rights Watch and Greenpeace, use AI-powered platforms to build engagement, for example to segment their supporters by location, see who opens their emails, and organize grassroots advocacy campaigns. These platforms also help mobilize "grasstops" advocates: they identify the supporters most interested in getting involved, send them advocacy training videos, provide instructions for coordinated action, collect reports on the meetings these advocates hold with policymakers or local communities, and host private social media groups where grasstops advocates can update each other, and brag, about the work they've done.

Mitigate Repression

When governments block online content, AI tools like Geneva help to circumvent the censorship. Activists in repressive environments use Geneva to identify blocked internet protocol (IP) addresses, automatically reroute virtual private network (VPN) channels, and thus ensure mostly uninterrupted service. Similarly, long-standing tools such as Psiphon, Tor, and Lantern incorporate machine learning to optimize traffic routing to let users get around online censorship.

Pro-democracy activists are frequently targets of authoritarian government surveillance and cyber-attack. They can turn to AI for protection. AI can keep them up-to-date on emerging cyber threats—on malware and methods to steal their data or disrupt their online activities—and can detect anomalies in network traffic, which may signal a cyber-attack.

Moreover, AI can predict potential threats before they cause harm, automate responses to common intrusions, and recommend ways to mitigate complex attacks. Activists thus apply AI to strengthen their security proactively, respond more quickly to cyber-attacks, and limit the damage. Since repressive regimes employ AI to go after their critics, AI is critical for pro-democracy activists to defend themselves.
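The anomaly detection described above can be reduced to a toy example: flag time windows whose traffic volume strays far from the historical baseline. Real systems learn much richer models of normal behavior; this z-score sketch, with made-up request counts, only illustrates the principle.

```python
from statistics import mean, stdev

def flag_anomalies(request_counts, threshold=3.0):
    """Return indices of time windows whose request volume is a statistical outlier.

    A window is flagged when its count sits more than `threshold`
    standard deviations from the mean of the whole series.
    """
    mu = mean(request_counts)
    sigma = stdev(request_counts)
    return [
        i for i, count in enumerate(request_counts)
        if sigma > 0 and abs(count - mu) / sigma > threshold
    ]
```

Given twenty windows of roughly 100 requests followed by one of 1,000, the function flags only the final window, the kind of sudden spike that might signal a denial-of-service attempt or credential-stuffing run.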

Use AI Wisely and Ethically

There are plenty more uses of AI. It serves, for example, to improve business processes and assist with volunteer recruitment, fundraising, legal analysis, documentation of human rights violations, and individualized learning.

Civil society organizations can turn to AI to make their operations more efficient, for example to begin literature reviews, produce first drafts of job descriptions, or edit proposals. AI can support their initiatives in significant ways, but it cannot effectively substitute for original thought, decision making, or human contact.

AI should only augment, not replace, direct engagement with constituents. Civil society groups should continue to solicit input and feedback from the communities they serve. They should never rely on AI alone to analyze political and social contexts, assess public attitudes and needs, design activities, or generate campaign messages or content. To remain effective and uphold ethical standards, activists should keep the needs and aspirations of their constituents at the center of all that they do. They should stay committed to human-centered design.

Civil society groups should be careful to choose and operate AI systems ethically, particularly to protect private data and to address bias. They should maintain robust internal systems for data protection to safeguard the privacy of their constituents, and should conduct due diligence on the security systems of AI tools they use, just as they check the ethical sourcing of other products and services they procure. In addition, they should assume responsibility for privacy protection of any data they share, so that data they feed into the AI models of donors or partners don’t inadvertently disclose private information.

To the extent feasible, civil society organizations should only collect private data that individuals explicitly consent to give them. Such a high standard of data privacy is, however, cumbersome and costly (as evident in requirements to comply with the European Union's General Data Protection Regulation) and may be less realistic for activist groups in many parts of the world, but best efforts are necessary everywhere.

Bias often creeps into the datasets behind AI models that predict political or social trends or provide insights into what people think or want. Civil society organizations should identify biases in their constituent datasets, which may skew toward men, urban residents, or particular ethnic or religious groups, and adjust the datasets to better represent the entire population they aim to reach. Moreover, the use of American- or European-made tools in much of the world may introduce Western biases, which civil society organizations should address.
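One standard way to adjust a skewed dataset is post-stratification reweighting: each group's responses are weighted by its population share divided by its sample share, so over-represented groups count for less and under-represented groups for more. The group names and shares below are invented for illustration.

```python
def reweight(sample_counts, population_shares):
    """Compute per-group weights so the weighted sample mirrors the population.

    sample_counts: how many respondents fall in each group.
    population_shares: each group's true share of the population (sums to 1).
    A group over-represented in the sample gets a weight below 1;
    an under-represented group gets a weight above 1.
    """
    total = sum(sample_counts.values())
    return {
        group: population_shares[group] / (count / total)
        for group, count in sample_counts.items()
    }
```

For instance, if 800 of 1,000 survey respondents are urban but urban residents are only 60 percent of the population, urban responses get a weight of 0.75 and rural responses a weight of 2.0. Reweighting corrects representation but not coverage: it cannot recover the views of groups the data missed entirely.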

Turn the Tide

As generative AI transforms politics and society, activists should take greater initiative to use it for advancing democracy. They should press technology companies to mitigate the risks posed by machine learning and to design new products that will aid democratic activism. And they should call on democratic governments to set and enforce rigorous standards for the responsible use of AI.

Above all, activists should integrate AI into their own strategies and operations. They should tap the great potential AI offers to bolster pro-democracy efforts.

Authors

Daniel Calingaert
Daniel Calingaert is Managing Director of the Open Society University Network and Dean for Global Programs at Bard College. He previously served as Executive Vice President of Freedom House, oversaw its civil society and media programs worldwide, and launched its global internet freedom program. He ...
Vukasin Petrovic
Vukasin Petrovic is Vice President at DT Institute, where he has been working at the intersection of human rights, media, and technology. He previously served as Senior Director for Strategy at Freedom House, where he oversaw a global portfolio of human rights programs.
