The Invisible Hand of Artificial Intelligence in Transnational Repression

Rumela Sen, Nusrat Farooq / Oct 2, 2024


After her 15-year premiership in Bangladesh and her fourth straight electoral victory, 76-year-old Sheikh Hasina, known as the Asian Iron Lady, resigned and fled the country on August 5, 2024. While the student protests leading up to this event garnered international media attention, equally striking was how Bangladesh, once a champion of democracy and digitalization in the Global South, became an exemplar of democratic backsliding.

In hindsight, it is clear that the government that promised equitable distribution of technologies and the creation of a Smart Bangladesh based on more transparent governance and connected citizens also engaged in electoral misconduct, blatant nepotism, and relentless targeting of the opposition, not just within the country but across its borders as well. Digital tools often harnessed to foster transparency and participation in democracy can also be co-opted and weaponized by regimes for surveillance, control, and repression, both within and beyond borders.

Regime elites suppressing dissent beyond state borders is known as transnational repression. Freedom House coined the term Digital Transnational Repression (DTR) to draw attention to the fact that regime elites increasingly use digital tools to amplify their reach across borders to monitor, intimidate, and silence dissidents. At the center of recent developments in Bangladesh and other backsliding democracies, we discuss the invisible hand of digital tools in amplifying transnational repression.

While more prominent episodes of transnational repression, like arbitrary arrests and assassinations by notorious authoritarian regimes such as China, Russia, Saudi Arabia, and Iran, caught the attention of the international community, largely overlooked are instances of digital censorship and surveillance, nationally and transnationally, by backsliding democracies like Bangladesh.

Digital transnational repression includes digital surveillance, deployment of spyware, phishing and hacking attacks, doxxing, online harassment, and disinformation campaigns, all of which threaten targeted individuals, often forcing them into self-censorship and withdrawal from social media. Although transnational repression is not new, the advent of digital tools has introduced a paradigmatic shift by expanding its repertoire, reducing its cost, and accelerating its proliferation. As such, it deserves special attention as an extension of the weakening of freedoms and rights in backsliding democracies.

DTR is best understood in the broader context of the decline of democracy and the concomitant rise of popular discontent, digital tools, and the reach and impact of platforms. Hasina, for example, wanted to monitor and intimidate her critics both within and beyond the borders of Bangladesh. So, she instructed Bangladesh’s overseas diplomats to “be vigilant” against Bangladeshi dissenters abroad and attempted to misuse international law enforcement tools against them.

From the perspective of the Global North, DTR also has serious implications for their critical infrastructure, raising questions about their responsibility in safeguarding the lives and liberty of activists and dissidents seeking refuge in their jurisdiction. More broadly, DTR impacts debates on citizenship rights, immigration, freedom of speech and expression, and the technological infrastructure of digital platforms. How can host governments and digital platforms that host these dissenters safeguard their rights?

Artificial Intelligence is Changing Digital Transnational Repression

We argue that emerging technologies, particularly artificial intelligence (AI) and, subsequently, generative AI, have further transformed the landscape of transnational repression. AI unleashed a silent revolution in DTR by exponentially amplifying its reach and impact. AI-powered content moderation tools, as used by Iran, for example, make it easier for regimes to control the narrative and suppress opposition without ever resorting to costly human intervention.

Integrating AI technologies can also make spyware like Pegasus significantly more dangerous to privacy and cybersecurity. The most prominent case associated with Pegasus is the murder of Jamal Khashoggi, a journalist who openly criticized the Saudi regime. Other examples include Ethiopian refugee Tadesse Kersmo, tracked by the Ethiopian government through his computer via a comparable commercial intrusion kit called FinFisher, and the abduction of Paul Rusesabagina, the real-life hero of Hotel Rwanda and critic of the Rwandan government. In theory, it is not unreasonable to expect that integrating AI into spyware like Pegasus could enable more sophisticated data filtering, personalized targeting, and even predictive analysis.

There are also examples in the broader surveillance industry where AI has been integrated into pre-existing surveillance systems, most notably in China’s AI-powered facial recognition technologies. Indeed, one of the most significant trends in AI-driven transnational repression is that 51 percent of advanced democracies also deploy AI surveillance systems. The Artificial Intelligence Global Surveillance (AIGS) Index shows that AI has supercharged digital transnational repression by enabling unprecedented levels of state surveillance through facial recognition, behavior analysis, and real-time data processing, as employed in Xinjiang to monitor and control the Uighur population.

Yet the most impactful may be generative AI, which further increases the speed, scale, and spread of DTR. The November 2022 launch of ChatGPT disrupted the AI community, particularly because large language models (LLMs), such as OpenAI’s GPT-3, can now create novel content that is not only difficult to distinguish from genuine content but also difficult to monitor, given the absence of clear regulations holding creators accountable.

The rapid creation of hundreds of thousands of pieces of fake content by generative AI amplifies surveillance, censorship, and disinformation to an unprecedented level. Repressive regimes can now easily generate convincing text, deepfake videos, and fabricated news articles, and inundate social media with propaganda that drowns out and discredits dissenting voices. Tracing this content back to its creators takes enormous resources, capacity, and technological tools that targeted dissidents and activists generally do not possess.

The addition of generative AI to the toolbox of repressive governments also threatens democratic norms like never before by expanding the production of misinformation that erodes public trust in institutions, distorts the electoral process, and endangers fundamental rights and security of targeted individuals. Researchers point out that Iran, Venezuela, Russia, and China are already using generative AI to manipulate information, spreading false narratives and discrediting opposition voices, which erode the free flow of information, a cornerstone of democratic societies.

In addition, the open-source nature of generative AI has ‘democratized’ access, and its rapid cross-platform spread allows individuals with minimal technical skills to launch harassment campaigns discreetly and independently of state infrastructure. In the past, such operations required extensive planning, expertise, and resources; the reduced cost now allows various rogue non-state actors to participate in DTR. For instance, in 2023, undercover journalists recorded a team of Israeli contractors known as ‘Team Jorge’ who claimed to use sophisticated disinformation techniques, including fake social media accounts and AI-driven strategies, to influence political outcomes in 30 countries.

In Nigeria’s February 2023 election, state-affiliated groups and political parties reportedly hired social media influencers to spread false narratives about their opponents and ran troll farms to harass and discredit opposition voices online. Notably, this occurred despite Nigeria’s National Information Technology Development Agency (NITDA) having introduced the Code of Practice for Interactive Computer Service Platforms, which mandates that internet intermediaries remove unlawful content within 48 hours, a requirement that could itself curb free expression. This illustrates how laws governing digital platforms can be vaguely written and biased against citizens, complicating the protection of dissidents’ rights from repressive governments, even across borders.

Thus, generative AI introduces a new layer of threat through deepfakes and synthetic media, which regimes can weaponize to discredit activists, spread misinformation, or forge content that undermines the legitimacy of democratic movements abroad. AI tools can also craft highly targeted social engineering attacks, from personalized phishing campaigns to exposing activists’ identities, by automating the production of false or manipulative narratives at breakneck speed.

Remedial Policies: New Directions

DTR threatens freedom globally as backsliding democracies increasingly use such practices to exercise control over dissident expatriates. While concerns about disinformation interfering with election outcomes in the United States deserve attention, it is also important to consider the many other ways new tech-enabled disinformation increasingly serves authoritarian ends in the rest of the world, and how these risks are likely to intensify in the coming years as the use of social media platforms grows.

The legal framework governing generative AI and digital harassment is significantly underdeveloped worldwide. Even when synthetic content can be traced back to its creator, the creator may not be held accountable in the absence of laws and regulations, or potential rules may need to be balanced against constitutional protections for free speech. Even in high-profile cases, such as the deepfake that mimicked the voice of US President Joe Biden to discourage primary voters in New Hampshire, accountability is unclear under the US legal system. To address this legal void, governments worldwide need to implement regulations on AI-generated content that address the most dangerous harms, establish clear lines of accountability for creators, and promote international cooperation to standardize laws governing digital harassment.

Host-country governments, mostly in the Global North, must also strengthen legal frameworks to protect dissidents, offering asylum and ensuring legal recourse for those targeted by foreign governments. Without increased training and collaboration with law enforcement and intelligence agencies to identify and disrupt foreign state-sponsored harassment, including cyber threats and intimidation campaigns, governments cannot eliminate authoritarian incursions into their domestic jurisdictions.

However, domestic laws in isolation are inadequate to address the unique challenges posed by DTR. Even when there is evidence of state involvement in digital harassment, the international legal framework offers limited recourse. Sovereign immunity and geopolitical considerations often hinder accountability, allowing states to act with impunity.

Tech companies also play a critical role in countering digital transnational repression by strengthening their trust and safety and cybersecurity protocols to detect and prevent attacks on dissenting voices, including state-sponsored cyberattacks. They can implement more consistent and stricter content moderation policies to swiftly remove harmful content, such as targeted harassment or disinformation campaigns aimed at vulnerable groups.

Most importantly, we recommend that the risks of digital transnational repression be mitigated not just in government corridors but also at the grassroots level by civil society organizations (CSOs), which play a crucial role in defending digital rights and ensuring democratic resilience. By educating the public, monitoring government actions, and advocating for stronger digital protections, CSOs can serve as a primary line of defense against authoritarian tactics that seek to undermine democratic freedoms.

Authors

Rumela Sen
Rumela Sen is currently the Director of the Masters in International Affairs program at the School of International and Public Affairs (SIPA), Columbia University. Sen received her PhD in Government from Cornell University. Her first book was published by Oxford University Press and she is currently...
Nusrat Farooq
Nusrat Farooq is an international security and trust & safety expert. Currently, she’s advancing her expertise as an upcoming PhD student. Farooq previously worked with the Global Internet Forum to Counter Terrorism, leading the evolution of its international Incident Response Protocol and respondin...