Time to Act on Harmful Deepfakes & Algorithms
Gretchen Peters / Oct 31, 2024

Gretchen Peters is the Executive Director of the Alliance to Counter Crime Online.
Over the past two months, I’ve been on a dystopian journey to understand how illicit actors are weaponizing AI and deepfake technologies to defraud and prey upon innocent people. Our new report on the issue is out today.
I’ve watched a deep-faked Rep. Nancy Pelosi (D-CA) in bed with a deep-faked former President Donald Trump and reviewed disturbing forgeries that transformed actress Emma Watson into an underage version of herself performing sex acts. I’ve looked at “nudify” technologies that enable pedophiles and stalkers to spread deep-faked explicit images of real-life celebrities and high school girls.
I’ve learned how digital forgers are using off-the-shelf technologies to scam people in real time over WhatsApp and FaceTime video calls, and I’ve screened promoted content spreading on Facebook, X, TikTok, and Instagram, where digital forgeries of public figures, from Canadian Prime Minister Justin Trudeau to billionaire financier Warren Buffett, are used to hawk crypto investment scams and other fraud.
If you want the disconcerting experience of watching a bald white man impersonate the late rap legend Tupac Shakur, tune into the YouTube channel of J. Rent, a digital forger with 250,000 followers who produces detailed instructional videos on how to use AI tools, including Google’s own Colab, to replicate the melodies, beats, lyrics, and voices of rap artists ranging from Kanye West and ASAP Rocky to Lil Wayne and Eminem.
Cybercrime already costs the globe more than $9 trillion annually. To put that number in perspective, if cybercrime were a country, it would have the world’s third-largest GDP after the United States and China.
And with these generative AI and deepfake technologies widely available, we can expect an even bigger tsunami of fraud, exploitation, and intellectual property theft. With their capacity to confound human perception, these technologies could also result in widespread “reality apathy,” whereby a contagion of forgeries results in people no longer trusting what they see and hear.
To some extent, this is already happening. The recent flood of weaponized disinformation surrounding Hurricane Helene relief is already so bad that FEMA was forced to create its first-ever “rumor response” website.
If all this doesn’t sound scary enough on its own, it’s terrifying to contrast the rate with which new AI and deepfake technologies are being released with the plodding pace with which the US government and Congress are working to address and regulate the myriad harms caused by these technologies.
The Internet – and in particular, a handful of ubiquitous social media platforms – has already transformed how the world communicates, transmits information, and conducts commerce. The digital age has surely brought many benefits to society, but it has also dramatically reshaped and amplified the capacity of illicit actors to prey upon and recruit victims, organize illicit campaigns, raise funds, and market illegal products. The web – as currently regulated – also provides anonymity for bullies, pedophiles, and imposters.
Current US laws governing cyberspace perpetuate inequities in the justice system that favor rich and powerful tech companies, immunizing them from liability while putting at risk the most vulnerable – children, minorities, the elderly, LGBTQ+ communities, and even endangered animals.
But it’s important to remember: a digital forger who has stolen someone’s identity to scam another person online is not expressing their protected right to free speech but committing a felony.
The Alliance to Counter Crime Online, which I co-founded, believes tech companies should face liability for hosting illicit and exploitative conduct on their platforms, just as any brick-and-mortar entity would face liability for hosting illegal and exploitative activity on its premises.
There is also an urgent need for industry-wide regulations – and regulators – to govern digital technologies, including social media algorithms, generative AI, and deepfake technologies.
These include laws specifically targeting runaway deepfakes, like the bipartisan No FAKES and No AI FRAUD Acts, which would provide protections for ordinary Americans and performers alike with regard to their identity and likeness. California and Texas have already outlawed sexually explicit digital forgeries, and seven other states are working on similar laws; but none of these address other harms caused by deepfakes, in particular their growing use in online fraud.
The legal regime we currently live under is not just spreading harm. It’s a regulatory failure. Try to imagine Boeing, Ford Motor Company, or a medical device maker being allowed to roll out new products without safety testing. That’s precisely what’s happening – virtually every week – with digital products.
At the Alliance to Counter Crime Online, we support Sen. Elizabeth Warren (D-MA) and Sen. Lindsey Graham’s (R-SC) Digital Consumer Protection Commission Act, which would establish an independent, bipartisan regulator charged with licensing and policing tech companies.
There is also a need for broader legal reforms to protect vulnerable communities online, in particular children and the elderly, and to reform the scope of the immunities provided by Section 230 of the 1996 Communications Decency Act that encourage platforms to turn a blind eye to problems rather than address them. We also call on Congress to fix the broken WHOIS system that hamstrings efforts to find and stop abusers and fraudsters online.
The current legal regime is effectively a massive subsidy to the tech industry and platform developers, who are allowed to live-test their products on the public and face scant accountability when those products cause harm. It’s time for that to change.
Computer code is just ones and zeros. How we code systems to operate is up to us. Policies and laws should set the framework, and they should be geared to protect the most vulnerable among us, not tech billionaires.