
Deepfake Dilemma: Urgent Measures Needed to Protect American Institutions

Aaron Poynton, Siwei Lyu / Jul 26, 2024

Within hours of the assassination attempt on former President Donald Trump, AI-manipulated images began circulating on the internet, fueling conspiracy theories, write Dr. Aaron Poynton and Dr. Siwei Lyu.

A composite of a still image drawn from a DeSantis campaign video that included a manipulated element showing former President Donald Trump embracing Dr. Anthony Fauci, former Chief Medical Advisor to the President of the United States.

The rapid advancement of deepfake technology has emerged as an urgent threat to the integrity and security of American institutions, particularly as the 2024 elections approach. AI-generated deepfakes have become sophisticated enough that discerning synthetic content from reality is difficult even for trained observers. This development underscores the immediate vulnerability of American institutions to deepfakes and the pressing need for comprehensive legislation and enforcement, the integration of technological solutions, and heightened public awareness.

Deepfake technology, which uses artificial intelligence to create realistic but fake images, videos, and audio, has seen remarkable advancements since the last election cycle. Four years ago, deepfakes, while concerning, could often be detected by the human eye due to subtle inconsistencies in the generated content, such as unnatural facial movements, odd lighting effects, or lip-sync mismatches. Today, however, the latest deepfake algorithms produce content nearly indistinguishable from genuine footage. Moreover, such synthetic media are being deployed with frightening frequency: according to a study by Deeptrace, the number of deepfake videos online doubled every six months between 2019 and 2021.

The improvement in generative AI models, including generative adversarial networks (GANs), transformers, and diffusion models, has dramatically increased the quality of deepfakes. For instance, GANs involve two neural networks in a relentless duel: one generates fake content while the other critiques its realism, leading to iteratively enhanced and virtually flawless deepfakes. Diffusion models learn to create realistic deepfakes from random noise by reversing the process through which real content is gradually corrupted into noise. Research highlights how these advancements diminish the gap between real and fake content, emphasizing the urgent need for advanced detection methods.
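The corruption-and-reversal idea behind diffusion models can be sketched numerically. The snippet below runs only the forward (corruption) half on a toy one-dimensional signal; the noise schedule and the signal itself are illustrative assumptions, not drawn from any real model:

```python
import numpy as np

# Forward diffusion: real content is gradually turned into noise over T steps.
# A trained model learns to reverse this, generating content from pure noise.
rng = np.random.default_rng(0)
T = 200
betas = np.linspace(1e-4, 0.05, T)          # per-step noise schedule (toy values)
alphas_bar = np.cumprod(1.0 - betas)        # cumulative signal retained by step t

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))  # the "real" content (a toy signal)

def corrupt(t):
    # Closed-form sample of the partially corrupted signal x_t given x_0.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

corr_early = np.corrcoef(x0, corrupt(0))[0, 1]     # still close to the signal
corr_late = np.corrcoef(x0, corrupt(T - 1))[0, 1]  # nearly pure noise
```

Generation runs the learned reversal of this process: starting from noise like the final step and denoising back toward plausible content.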

The vulnerabilities of American institutions to deepfakes are multifaceted. In the political arena, deepfakes can be weaponized to manipulate public opinion, undermine trust in democratic processes, and spread disinformation. For example, within hours of the assassination attempt on former President Donald Trump, AI-altered images began circulating on the internet, fueling conspiracy theories. Now, picture a scenario where a deepfake video depicting a political candidate making inflammatory remarks goes viral on the morning of an election, causing irreversible damage because the fake video spreads faster than the truth. In that scenario, democracy itself has been tampered with and the future altered, highlighting the immediate need for stringent measures against such technological abuses.

Beyond politics, deepfakes threaten the judiciary, law enforcement, and the media. For instance, fabricated evidence could be used to incriminate individuals or sway judicial outcomes unjustly. In a recent example, a Baltimore-area principal was placed on administrative leave and put under investigation after a deepfake of him making racist and antisemitic comments went viral. Additionally, law enforcement agencies might be misled by fake surveillance footage, and media outlets could inadvertently amplify false information, eroding public trust in journalism.

Addressing the deepfake threat requires a multi-pronged approach involving federal legislation and regulation with enhanced enforcement, technological innovation, and public awareness. In the absence of federal legislation, federal regulatory measures should be established to provide specific guidelines and rules that deter the malicious use of deepfakes. Ultimately, the patchwork of state laws and ongoing legislative efforts should be harmonized into comprehensive federal legislation. It should impose strict penalties for creating and distributing harmful deepfake content while providing mechanisms for victims to seek redress. Additionally, it is crucial for law enforcement and prosecutors to actively enforce these laws to ensure perpetrators are held accountable and to reinforce the seriousness of these offenses.

Cutting-edge technological solutions, especially those leveraging AI, multimodal approaches, and blockchain, offer promising avenues for ensuring content authenticity and provenance. Deep learning AI models are becoming more sophisticated in detecting deepfakes by learning distinguishing features from labeled real and fake content. Multimodal technology, which synthesizes diverse data modalities, enhances the detection and mitigation of deepfakes by leveraging cross-modal correlations and inconsistencies to identify and flag manipulated content, leading to more robust and reliable predictions compared to monomodal analyses.
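The learning-based detection described above can be illustrated with a minimal sketch: a linear classifier fit by gradient descent on labeled feature vectors. The features and the statistical shift separating real from fake here are synthetic stand-ins for the artifact statistics an actual detector would extract from media:

```python
import numpy as np

# Toy sketch of learning-based deepfake detection: fit a linear classifier on
# labeled feature vectors. All data here is synthetic and purely illustrative.
rng = np.random.default_rng(42)
n, d = 400, 8
real = rng.normal(0.0, 1.0, (n, d))   # features extracted from genuine media
fake = rng.normal(0.7, 1.0, (n, d))   # fakes leave a slight statistical shift
X = np.vstack([real, fake])
y = np.concatenate([np.zeros(n), np.ones(n)])

w, b = np.zeros(d), 0.0
for _ in range(500):                  # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == y)    # how well the learned boundary separates
```

Real detectors replace the linear model with deep networks and the toy features with learned representations of visual, audio, and temporal artifacts, but the principle of learning a decision boundary from labeled examples is the same.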

Additionally, pioneering research in deepfake detection has developed innovative techniques that identify subtle artifacts left by deepfake algorithms. Recent research focuses on using physiological signals, such as heartbeat and pulse, which are inadvertently captured in videos but are challenging for deepfake algorithms to replicate accurately. These physiological signals provide a robust means of differentiating real from fake content. Blockchain's decentralized and immutable ledger can verify the origin and integrity of digital media, presenting a formidable barrier against the spread of deepfakes. By embedding cryptographic hashes of original content into the blockchain, any tampering or alterations become immediately detectable, ensuring that only authenticated content circulates. Furthermore, technologies such as digital watermarks and media fingerprints ensure content authenticity and clear provenance.
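The hash-based tamper detection described above can be sketched in a few lines. This is a minimal illustration rather than a blockchain implementation; the media bytes and the "ledger" are placeholders:

```python
import hashlib

# Hash-based provenance: at publication, a cryptographic fingerprint of the
# media is recorded on an immutable ledger; anyone can later re-hash their
# copy and compare it against the recorded value.
def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

original = b"official campaign video bytes"       # placeholder media bytes
recorded = fingerprint(original)                  # written to the ledger once

received_ok = b"official campaign video bytes"
received_bad = b"official campaign video bytes (one frame swapped)"

authentic = fingerprint(received_ok) == recorded  # True: unaltered copy
tampered = fingerprint(received_bad) == recorded  # False: change is detected
```

Because any alteration to the content changes its hash, a mismatch against the ledger entry immediately reveals tampering, while a match attests to the content's original provenance.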

Integrating these detection technologies into mainstream distribution platforms is crucial. Social media companies and content-sharing platforms must incorporate advanced deepfake detection tools to filter or label harmful content before it spreads. Collaboration with AI researchers and continuously updating detection algorithms can enhance these efforts. Additionally, public education plays an essential role. Despite the widespread publicity and growing threat that deepfakes pose to society, many people remain unaware of what deepfakes are and lack the ability to detect them reliably. By raising awareness about the existence and risks of deepfakes through educational campaigns, we can empower individuals to become more discerning consumers of digital content. This combined approach will significantly reduce the likelihood of widespread deception and help maintain the integrity of our digital information landscape.

The rapid evolution of deepfake technology presents a profound and immediate threat to the integrity of American institutions, most urgently the 2024 presidential election. The near indistinguishability of modern deepfakes underscores the critical need for robust legislation, stringent enforcement, and advanced technological solutions to ensure content authenticity. Combining these measures with widespread public awareness and education is crucial in combating the deepfake menace. By taking proactive measures now, we can safeguard institutions and uphold the trust and reliability that form the bedrock of our democratic society.

Authors

Aaron Poynton
Dr. Aaron Poynton is a businessman, entrepreneur, and consultant. He is the CEO of Omnipoynt Solutions, a consulting firm specializing in 4IR technology strategies, and he’s CCO of A3 Global, a next-generation company focused on the future of mobility. Aaron holds a doctorate and earned three master...
Siwei Lyu
Siwei Lyu is the SUNY Empire Innovation Professor at the Computer Science and Engineering Department of the University at Buffalo. Siwei obtained his PhD degree in Computer Science from Dartmouth College. His research work focuses on the development of counter technologies to various types of media ...
