The Mad Men Are Now Math Men: A New Playbook for Political Marketing in the Age of AI
Asma Sifaoui / Jan 5, 2025

In the 2024 US primaries, fake robocalls featuring AI-generated audio mimicking President Biden were used to mislead Democratic voters in New Hampshire. This incident underscored a new era in political marketing, where artificial intelligence (AI) is reshaping not only campaigns but also the very fabric of democratic engagement. As AI continues to evolve, its impact on political marketing raises urgent questions about regulation, ethics, and transparency. In short, AI could rewrite the campaign playbook, but are policymakers ready to keep up?
While AI has long been used in advertising, its role in political marketing has recently become more disruptive and transformative. No longer limited to automating tasks or personalizing ads, AI now enables hyper-targeted messaging, manipulative deepfakes, and unprecedented data-driven insights, all at scale. This technology exposes loopholes in the regulatory frameworks governing political advertising that leave voters vulnerable to manipulation. So, how do we regulate a system where AI influences everything from voter targeting to ad creation? What policies must we urgently reform to ensure ethical use?
Current regulations, focused primarily on traditional media, are ill-equipped to address AI's unique challenges. The distribution of highly persuasive, tailored content at scale, often without voters even realizing it, demands an overhaul of existing laws. These frameworks need more than tweaks; they need new rules that reflect AI's transformative power.
Strengthening Data Privacy Laws: Protecting Voter Information
The heart of AI-driven political marketing lies in the data that powers precise voter targeting and personalized content creation. Yet US data privacy regulation remains fragmented and sectoral, with no comprehensive federal privacy law governing the vast amounts of data used in marketing or political campaigns. AI exposes the fundamental limitations of this patchwork system in handling the complexities of modern data use, underscoring the urgency of reforming current data governance frameworks. The Cambridge Analytica scandal, in which data harvested without adequate oversight was used to profile voters with targeted political content ahead of the 2016 election, demonstrated how such information can be exploited to manipulate voter behavior.
Ensuring Transparency in AI Usage: Informing Voters
The use of AI for ad targeting in political campaigns is not new. Still, the democratization of AI tools has made them more accessible, allowing even smaller campaigns to create and distribute AI-generated content. As a result, voters are increasingly exposed to messaging shaped by AI, often without their knowledge. The lack of mandated disclosure means that voters may be unaware of how such content may influence their perceptions and decisions.
Despite recent transparency measures from companies like Google and Meta, which require AI-modified political ads to carry disclosures, these efforts vary in implementation and enforcement across the broader digital media ecosystem. Some platforms scan for AI use and apply labels when deemed necessary, but inconsistencies persist due to the limitations of detection technologies. Regulations should mandate uniform AI disclosures across platforms and require real-time transparency portals, ensuring voters understand how AI shapes political messaging.
Combating Deepfakes and Misinformation: Safeguarding Political Integrity
One of the most troubling uses of AI in politics is the creation of deepfakes that can convincingly mimic real people, often to deceive voters, as with the fake Biden robocalls in New Hampshire described above. Regulations on the use of deepfakes in political campaigns exist in some states, with nineteen having enacted laws. Texas led in 2019, banning deepfake videos intended to harm a candidate or influence elections. California followed, prohibiting "materially deceptive" media within 60 days of an election. In 2023, Minnesota and Michigan restricted AI-generated content within 90 days of an election, and Washington allowed candidates to sue over false synthetic media unless disclosed. States including New Mexico, Florida, Utah, Indiana, and Wisconsin now require disclosure of AI-generated content in political ads.
While these state initiatives are encouraging, comprehensive federal legislation is still needed to ensure uniformity in regulating AI-generated content in political campaigns. Such legislation should mandate that all AI-generated or AI-altered political content be clearly labeled across media platforms so voters can easily distinguish between real and manipulated media. In addition, campaigns and platforms should be required to disclose AI usage in real time through publicly accessible transparency portals, repositories, or broader AI oversight initiatives, providing voters and regulatory bodies with immediate access to information on how AI tools are used in a given ad.
Bridging the Regulatory Gap: Establishing Oversight for AI in Campaigns
State-level legislation on AI is fragmented, leading to a patchwork of policies across the country. While the Executive Order on AI under President Biden sought to create a national framework, it will likely be rescinded by the incoming Trump administration, which has signaled less interest in stringent AI regulation. Trump's second presidency could mean a rollback of previous efforts to establish ethical guidelines and stricter oversight of AI technologies in political campaigns. This could leave regulation to the states, exacerbating inconsistencies and creating more opportunities for misuse.
Without comprehensive federal regulation, campaigns may experiment with AI without meaningful oversight. A dedicated AI oversight body within the Federal Election Commission (FEC) could monitor the ethical use of AI in political campaigns and ensure compliance with transparency and data protection laws. The FEC's Draft Interpretive Rule clarifies that AI-generated media falls under existing fraudulent misrepresentation laws, meaning AI cannot be used to deceive voters in ways those laws already prohibit. However, a broader federal framework is still necessary to address AI's evolving role in campaigns, ensuring consistent enforcement and tackling issues like transparency and deepfakes. Additionally, current FEC regulations need to be updated to account for new AI technologies used in political advertising and to prevent the spread of manipulated or misleading content.
The Path Forward
AI’s role in political marketing is expected to grow, bringing risks and opportunities. Tools for personalized campaigns and deepfakes will grow more accessible, increasing the urgency for regulation. Federal efforts like Senator Amy Klobuchar’s (D-MN) “Honest Ads Act” and “Protect Elections from Deceptive AI Act” remain stalled due to industry lobbying and political disagreement.
Under Trump’s second presidency, a deregulation agenda is expected to hinder progress on AI oversight. The administration is likely to prioritize “industry-friendly” policies over accountability and transparency, which will leave a fragmented patchwork of state laws to address AI-related risks. While some argue that regulations could stifle innovation, the unchecked rise of AI in politics risks plunging the "Mad Men" of advertising into full-blown madness. The question isn’t whether regulation slows innovation—it’s whether pursuing innovation without accountability leaves democracy behind.