AI Propaganda Will Be Effective and Easily Accessible

Max Rizzuto / Apr 12, 2023

The capacity of AI tools has grown in tandem with the potential threats they pose to our information space, says Max Rizzuto, a research associate at the Atlantic Council’s Digital Forensic Research Lab (DFRLab).

In a Truth Social post on March 18, 2023, former US President Donald Trump wrote that he expected to be arrested the following Tuesday. A week of frantic chatter in mainstream and social media ensued, anticipating what might happen next. During this information void, AI-generated images envisioning the former president's arrest circulated widely, titillating Trump's opponents while enraging many of his supporters. On March 31, a Manhattan grand jury indicted Trump over a hush money payment made just before the 2016 election, and he was arraigned in court on Tuesday, April 4.

Before the indictment, one set of AI-generated images, created using Midjourney and shared on Twitter by Bellingcat open-source researcher Eliot Higgins on March 20, accrued millions of views, spreading to other social platforms and multiple news websites. The circumstances of the images’ extraordinary virality and the contentious political and cultural nature of their subject matter could serve as a bellwether for the future of malign propaganda applications involving AI-generated images and other forms of synthetic media.

In December 2017, some of the first examples of weaponized AI-derived media surfaced in the form of AI pornography. In "deepfake" videos, AI-enabled face-swaps portrayed the likenesses of non-consenting celebrities in sexually explicit scenes. These artifacts were synthesized from publicly available datasets using open-source code on consumer-grade hardware; the only real constraint on their production was the patience and time required to generate convincingly realistic content.

Since then, the capacity of machine-learning tools has grown in tandem with the potential threats they pose to our information space, as the mechanisms for generating synthetic media have become more versatile and accessible.

Numerous examples of politically motivated bad actors embracing AI tools to deceive, manipulate, and erode public trust have already been documented (and possible future deceptions have been foreshadowed). The next generation of AI could dramatically expedite the creation of on-demand false evidence with minimal effort, creating deceptions that are ready for immediate distribution and conceived to incite a reaction.

The potential of next-generation AI-generated propaganda has not yet been realized due to three mitigating factors, each of which is now increasingly irrelevant: access, technical capability, and the time and effort required to generate and effectively disseminate a malign fake.

The technical complexity of AI tools and the processes required to use them were once a considerable barrier to creating fakes for propaganda purposes. Seemingly overnight, that access hurdle has been rendered moot by the "AI as a service" model, spawned by an industry that has attracted $248.9 billion in private investment since 2013 in the United States alone. Entities with access to immense capital, petabytes of storage, teams of highly skilled engineers, and unfathomable computational power have made it possible for anyone with an internet connection to manifest near-photorealistic images.

Meanwhile, the burden of collating training data no longer falls on the user. The latest generation of tools enables fabricated images to be spawned from simple text prompts. Early synthetic media was rife with illogical visual artifacts, evidence of the limitations of the complex computational processes responsible for their creation. But there has been steady progress in eliminating the unintended artifacts and contextual details that disrupt the illusion: inconsistent settings, depth of field, and vanishing points, as well as tell-tale indicators like distorted human hands or text rendered as gibberish. Still, the viral spread of the synthetic Trump arrest photos proved that current limits to absolute photorealism can be secondary to people's willingness to engage with fiction. Images don't have to be realistic to produce an emotional response.
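To give a sense of just how low the barrier has fallen, the sketch below shows what image generation through one of these services looks like in practice. It assumes OpenAI's publicly documented Python client and image model as an illustrative stand-in, since Midjourney, the tool used for the Trump images, offers no public API; the prompt and parameters are hypothetical.

```python
# A minimal sketch of "AI as a service" image generation.
# Assumes OpenAI's Python client (pip install openai) and an API key;
# Midjourney exposes no public API, so this is an illustrative stand-in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",  # text-to-image model
    prompt="a press photograph of a crowded city street, photorealistic",
    size="1024x1024",
    n=1,
)

# The service returns a URL to the finished image; no training data,
# hardware, or machine-learning expertise is required of the user.
print(response.data[0].url)
```

A handful of lines and a text prompt now stand in for what once demanded curated datasets, specialized hardware, and days of manual effort.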

Lastly, producing viable fake evidence fast enough to sway a given news cycle was, until recently, challenging. In the absence of on-demand AI-conjured content, high-quality fakes required both time and manual effort, limiting the number of bad actors willing and able to toil over synthetic media to sway public opinion. The window during which visual propaganda can be exploited is often fleeting, yet each new generation of synthetic media tools makes ever quicker turnarounds feasible.

The Trump images provide useful evidence that AI-generated falsehoods may thrive in information vacuums and are most pervasive when they coincide with a media cycle. The former president's preempting of the Manhattan district attorney's announcement of the pending indictment left more questions than answers and created fertile ground for falsehoods to grow.

The Trump case reminds us that people are willing to gravitate toward compelling imagery in the absence of trusted information. However, these conclusions are not without caveats. Trump is a unique example, and the circumstances of the images and the dramatic nature of their depictions certainly contributed to their viral success.

There is still progress to be made in boosting public awareness of the unique harms of AI-generated propaganda. For example, several mainstream news outlets circulated the fake photos of Trump without overlaying a watermark of any kind to mitigate the harm that arises when such images are amplified without context. The threats these technologies pose are orders of magnitude greater in areas with less robust information spaces. In environments devoid of confidence in institutions, false evidence could have lasting repercussions before being disproven or even questioned.
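As one concrete illustration of the kind of low-cost mitigation outlets could adopt, here is a minimal sketch, using the Pillow imaging library, that stamps a visible label onto a synthetic image before republication. The file names, label text, and styling are hypothetical, not a practice drawn from any outlet's workflow.

```python
# A minimal sketch of overlaying a visible label on a synthetic image
# before republication. Uses the Pillow library (pip install Pillow);
# file names and label text are hypothetical.
from PIL import Image, ImageDraw

def stamp_label(in_path: str, out_path: str, text: str = "AI-GENERATED") -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Draw a filled banner across the bottom edge, then the label on top of it.
    banner_height = max(24, img.height // 12)
    draw.rectangle(
        [(0, img.height - banner_height), (img.width, img.height)],
        fill=(0, 0, 0),
    )
    draw.text((10, img.height - banner_height + 4), text, fill=(255, 255, 255))
    img.save(out_path)

stamp_label("synthetic_photo.jpg", "synthetic_photo_labeled.jpg")
```

A visible banner is the bluntest possible instrument; it does not survive cropping, but it costs an outlet almost nothing to apply before amplifying an image of uncertain provenance.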

Researchers debating how to handle this technology and its impact face a monumental challenge, beginning with awareness of what fabrications are achievable with current and future AI technologies.

Beyond spreading awareness, a paradigm shift in how we interface with and perceive media is crucial to building resilience against potentially harmful forms of technological innovation. Photography and video, which have served as the highest authority for substantiating truth since their inception, are now open to question because AI can spoof them convincingly.

In remarks to the Washington Post, Sam Gregory, the executive director of the human rights organization Witness, proposed that "the aim may not be to convince people that a certain event happened but to convince people that they can't trust anything and to undermine trust in all images." A compelling, if daunting, premise.

In the face of the threat AI poses, and in defense of our collective understanding of truth, there may not be any room for half-measures. Progress in the fight for truth will not be easily earned. It may require a departure from the adage “I’ll believe it when I see it” to embracing an attitude of: “I can’t believe my eyes.” Not without verification, at least.

Though we have made progress in recent years and learned to better distinguish truth from falsehood, there is still more work to do. Preparing ourselves for a world dense with photoreal falsehoods will require introspection and conscious resistance to the muscle memory of our evolved modes of perception. Whether society is prepared to engage in such efforts will be put to the test in real time.

Authors

Max Rizzuto
Max Rizzuto is a research associate at the Atlantic Council's Digital Forensic Research Lab based in Washington, DC. His work on extremism and disinformation has been cited by the Associated Press, Axios, Al Jazeera, Business Insider, and more. He holds a bachelor's degree in Science, Technology, and Society...