The Rise of Artificial History

Craig Forman, Jamie Neikrie / May 15, 2024

Luke Conroy and Anne Fehres & AI4Media / Better Images of AI / Models Built From Fossils / CC-BY 4.0

Since their launch, generative artificial intelligence (AI) tools like OpenAI’s ChatGPT, Stability AI’s Stable Diffusion, and Midjourney have sparked multiple waves of panic and excitement. News coverage has rightfully focused on how these tools are making it easier and cheaper to produce high-quality synthetic media — including images, videos, audio, and more — distribute it at scale and across languages, and target it to a hyperlocal level.

When misused, these tools can dramatically undermine our shared sense of reality. Technologists and scholars are already using the phrase “Liar’s Dividend” for the inverse harm: political actors dismissing real content as deepfakes in order to avoid accountability. Now, an even deeper and more dangerous erosion of the truth is emerging. We call this “artificial history”: the ability of generative AI to create highly credible but entirely fictional synthetic history.

For instance, a YouTube channel run by a user called Epentibi has close to 30,000 subscribers. The channel is a fan page of sorts, where Epentibi uses generative AI to create fake documentaries, newsreels, and other clips that connect to “The New Order: Last Days of Europe,” a video game set in an alternate timeline in which Nazi Germany wins World War II.

Epentibi is talented and the clips are well made, complete with maps, graphics, and highly produced “Nightly News”-style videos voiced by convincing recreations of familiar and reassuring figures like Walter Cronkite and his contemporaries at European broadcasters. The segments are largely indistinguishable from actual broadcasts of the era.

In another example, the TikTok account @what.if_ai creates videos of alternate realities that flip the history of colonization onto the colonizers. The account has more than 88,000 followers, and its videos have generated over 2.2 million likes. The impetus for this content is well intentioned. As Jerald Marin, the creator behind @what.if_ai, told Rest of World, “My viewers come from all around the world. Some live in ex-colonies like Somalia, India, and Ireland, and often they want to visualize a different history….I hope to contribute to this ongoing conversation about the legacy of colonialism and the possibilities for a more just future.”

But then the other shoe drops. “Now someone like me…can create content that gets millions of views just from my own imagination,” Marin said. “Anyone can do it.”

Of course, historical hoaxes have been around since history began, from works of imagination like “Piltdown Man,” a forged fossil passed off as the missing link between apes and humans, to novelists imagining a Nazi victory in World War II.

What differs now, in ways both exciting and terrifying, is the powerful verisimilitude of these new generative AI tools, and the way that social media platforms give these synthetic artifacts a global reach at the click of a button, with little distinction between truth and fiction. In the right hands, these tools can enhance human creativity and ingenuity. In the wrong hands, they can empower nefarious actors to distort the truth in an effort to confuse, radicalize, or recruit viewers.

Let’s look, for example, at the 2022 Russian invasion of Ukraine. Ahead of that incursion, as well as the earlier 2014 annexation of Crimea, Russian state actors unleashed an information warfare campaign that portrayed Russians as valiant heroes. Russian President Vladimir Putin called Ukraine a far-right extremist state and claimed that the invasion was an attempt to “denazify” the country. Russian soldiers and citizens were told that they would be welcomed in Ukraine as heroes, liberating the country from its Nazi leaders and the Western forces behind them. An army of state-owned or state-backed media, paid internet trolls, and bot accounts boosted these messages across social media, with demonstrable spikes in the days before each invasion. The campaign, unsupported by any actual evidence, relied largely on Russia’s ability to manufacture conspiracy theories that played on long-standing ethnic and regional conflicts.

Now, imagine Russian claims about a Ukrainian Nazi state came complete with a web of synthetic artifacts substantiating this fictional history: retroactive points of evidence spun out of fake firsthand accounts, fabricated on-the-ground testimony, or deepfakes of trusted messengers. If you can retroactively rewrite the narrative of where we come from, you can make a different argument about where we should be going. In the past, creating forgeries to challenge historical narratives took time, and the difficulty of distributing these falsehoods through education or the media made it harder to erase or amplify particular perspectives. Generative AI unlocks the ability to create artificial history at a rapid pace, and social media permits its global distribution in an instant.

The legal and regulatory challenges posed by generative AI are very real today, and require no speculation (despite what some effective altruists would tell you about Terminator-esque doomsday scenarios). The number of deepfakes online increased tenfold from 2022 to 2023. Here in the US, AI tools have been used to create fake images of former President Trump being arrested and to place robocalls impersonating President Biden to 20,000 New Hampshire residents. Outside of the US, deepfake audio recordings disrupted Slovakia’s 2023 parliamentary elections, with the goal of swinging them toward a more pro-Moscow party.

It is also true that the solutions needed to combat these threats are similar to those that will help guard against artificial history. Tools that allow policymakers, researchers, and users to verify the context and history of digital media (such as the C2PA standards) are crucial and should be tested at scale. As highlighted in a recent report from the Center for News, Technology & Innovation, social media companies and news media organizations must also ensure that their internal policies apply to all manipulated media shared with malicious intent, including “shallowfakes” and “cheap fakes” that do not require advanced technological tools.
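To make the provenance idea concrete, here is a minimal sketch (ours, not from the report) of the lowest-level step in that verification chain: detecting whether a JPEG carries an embedded C2PA manifest at all. C2PA embeds its manifest store in JPEG APP11 marker segments as JUMBF boxes, so the sketch simply walks the file’s marker segments looking for that signature. It only detects presence; actually verifying the signatures and content hashes inside a manifest requires the official C2PA tooling, such as c2patool or the C2PA SDKs.

```python
# Heuristic presence check for an embedded C2PA manifest in a JPEG.
# A simplified sketch, not a validator: C2PA stores its manifest in
# JPEG APP11 (0xFFEB) segments as JUMBF boxes (ISO/IEC 19566-5), so we
# walk the marker segments and look for the JUMBF "jumb" box type and
# the "c2pa" label. Real verification needs the official C2PA tools.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":              # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                  # lost sync; give up
            break
        marker = data[i + 1]
        if marker == 0xFF:                   # fill byte before a marker
            i += 1
            continue
        if marker == 0xD9:                   # EOI: end of image
            break
        if 0xD0 <= marker <= 0xD8 or marker == 0x01:
            i += 2                           # standalone markers, no length
            continue
        if marker == 0xDA:                   # SOS: compressed data follows
            break
        (seg_len,) = struct.unpack(">H", data[i + 2 : i + 4])
        payload = data[i + 4 : i + 2 + seg_len]
        if marker == 0xEB and b"jumb" in payload and b"c2pa" in payload:
            return True                      # APP11 segment with C2PA JUMBF
        i += 2 + seg_len
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

Even this trivial check illustrates the policy point: provenance data is just bytes in a file, easy to strip and absent by default, which is why such standards only help if platforms and publishers adopt them at scale.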

While internal platform changes are needed, greater oversight and regulation must also be applied to the social media platforms that consistently spread false information and deepfakes but have so far largely responded to these threats by laying off crucial integrity teams and rolling back election-misinformation policies. The onus shouldn’t fall on users alone, but public awareness, media literacy, and civic education programs can also help citizens discern accurate information from false information (including how to use new content provenance tools), particularly during highly charged elections.

While AI tools have been around for much longer than ChatGPT, we are closer to the start of the AI conversation than the end. If these tools allow us to imagine positive examples of how they can enhance creativity, they also force us to grapple with the possible ways they can be exploited in the future. One of those ways will be to rewrite the past.

Authors

Craig Forman
Craig Forman is a former foreign correspondent and media executive who served as chief executive officer of The McClatchy Company, the second largest local-news publisher in the United States. Today, he is the Executive Chairman of the Center for News, Technology & Innovation, a global policy resear...
Jamie Neikrie
Jamie Neikrie is the Legislative Manager for the Council for Responsible Social Media (CRSM) and has been with Issue One since 2021. A distinguished professional in legislative strategy and advocacy, Jamie leads efforts to implement meaningful reforms on Capitol Hill, focusing on advancing privacy p...
