Regulating Election Deepfakes: A Comparison of State Laws

CJ Larkin / Jan 8, 2025

The rise of deepfake technologies ushered in new fears about the spread of misinformation in elections, including the 2024 presidential election. Over the past year, concerns grew across party lines about the role that campaign material manipulated by artificial intelligence would play in an increasingly complex information landscape. For many, these fears became a reality in January 2024, when deepfake robocalls featuring President Biden urging voters not to vote in New Hampshire’s primary reached numerous registered Democrats across the state. New Hampshire prosecutors charged the individual responsible for the calls (who lacked formal ties to any political campaign) with 26 crimes, including voter suppression, intimidation, and impersonating a candidate. The Federal Communications Commission also fined the individual $6 million for violating call-spoofing and caller ID regulations.

For many states, including New Hampshire, this incident was a catalyst for passing legislation restricting the use of manipulated media in electoral campaigns. Eighteen states have enacted legislation to regulate the use of ‘manipulated’ or ‘deceptive’ media in campaign materials, encompassing digitally altered content such as synthetic audio, images, and videos designed to mislead or deceive. While most of these laws were passed in 2023 and 2024, Texas led the way in 2019 by banning manipulated media in campaign materials. Despite fears following the deepfake incident in New Hampshire, these laws have proven largely symbolic, with no apparent prosecutions for the use of manipulated media during the most recent elections.

At the same time, some of the laws have faced legal challenges. A federal judge in California granted a preliminary injunction against the state’s election deepfake bill, citing First Amendment concerns. In Minnesota, a state representative who had posted deepfakes of Vice President Harris on social media filed a lawsuit challenging the state’s election deepfake bill, citing violations of her “First Amendment rights to engage in political speech.” However, the case has since been overshadowed by accusations that a misinformation expert submitted an affidavit in defense of the bill that was written by artificial intelligence.

Despite minimal use of these bills to combat AI-manipulated campaign materials in the 2024 election cycle, it is clear that this issue is here to stay. This article aims to better understand these new regulations by comparing the bills on AI-manipulated media in elections passed in eighteen states. The bills are compared on three dimensions – how they define what constitutes “manipulated media” in campaign materials, what regulations they place on the use of such media, and what penalties they impose for violations.

Key Definitions

States use various terms to describe the manipulated content targeted by these bills. These terms reflect the diversity in how states conceptualize the issue, highlighting differences in their understanding of the technologies used, the forms of altered media, and the intended impact on audiences. In addition to defining manipulated media, state legislation often specifies the tools and methods used to create the content.

Defining manipulated or deceptive media

State lawmakers used five primary terms to describe the main concern of the bills. These include synthetic media, materially deceptive media, deepfakes, fabricated media, and digital impersonation. Seven states – New Hampshire, Utah, Oregon, Washington, Wisconsin, Delaware, and Idaho – use the term “synthetic media” to describe the campaign materials they seek to regulate. Utah, for example, defines synthetic media as audio or visual content “that was substantially produced by generative artificial intelligence.”

Six states – New Mexico, Alabama, Hawaii, Michigan, New York, and California – use the term “materially deceptive media.” For example, California defines materially deceptive media as “audio or visual media that is digitally created…that it would falsely appear to a reasonable person to be an authentic record of the content depicted in the media.” The term “deepfake” is only used by three states – Minnesota, Texas, and Colorado. Colorado defines deepfake as an “image, video, audio, or multimedia ai-generated content that falsely appears to be authentic or truthful and which features a depiction of an individual appearing to say or do something the individual did not say or do.”

Only two states use terms unique to their legislation. Indiana uses the term “fabricated media,” which they define as an “audio or visual recording of an individual's speech, appearance, or conduct that has been altered without the individual's consent such that…a reasonable person would be unable to recognize that the recording has been altered.” Arizona uses the term “digital impersonation,” defined as “synthetic media, typically video, audio or still image that…has been digitally manipulated to convincingly replace one person's likeness or voice with that of another using deep generative methods and artificial intelligence techniques,” and “…was created with the intention to deceive or lead reasonable listeners or viewers into believing that the content is authentic [and a] …true and accurate depiction of…something the impersonated person said or did.”

Defining the Tools, Methods, and Media Forms

Despite differences, the definitions generally seek to address three core aspects of manipulated media: the technology or tools (e.g., artificial intelligence, generative AI) used to create the media, the media forms being altered (audio, video, or images), and the intent and impact of the manipulation on the public. Most of these bills also reference the technologies involved, with artificial intelligence being the most frequently cited. Every state except Texas, Minnesota, and New York includes a direct reference to a tool or method used to create manipulated campaign media.

While some states, like New Mexico, provide explicit definitions of AI within their legislation, others, such as Wisconsin, refer to AI without defining it. Legislators also reference generative adversarial networks (GANs) and digital technologies, although no state explicitly defines GANs in its laws. Texas’s law defines “deepfake” (its term for manipulated media) as “a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality” but does not provide any further information within the bill as to how that video is created.

Minnesota’s legislation does not define the specific AI tools, stating that the media needs to be created in a way that is “substantially dependent upon technical means, rather than the ability of another individual to…impersonate (another) individual.” New York does not define the form of manipulated media it attempts to regulate and does not reference any creation tools within the legislation.

Creation tools

For the states that do directly reference creation methods of manipulated media, the most commonly cited tools are artificial intelligence (NM, AL, HI, UT, OR, WI, CO, MI, and AZ), digital technologies (HI, NH, OR, WA, DE, ID, and CA), and generative adversarial networks (NH, HI, WA, DE, and ID).

Of the bills that cite artificial intelligence as a creation tool, only Oregon, Wisconsin, and Arizona do not define artificial intelligence. For example, Wisconsin defines synthetic media as “Audio or video content that is substantially produced in whole or in part by means of generative artificial intelligence” but does not include a definition of artificial intelligence within the legislation. New Hampshire and Hawaii provide separate definitions for artificial intelligence within their legislation.

The bills that do define AI treat it as a separate term, distinct from manipulated media. For example, New Mexico’s legislation defines AI as “a machine-based or computer-based system that through hardware or software uses input data to emulate the structure and characteristics of input data in order to generate synthetic content, including images, video or audio.” This definition is separate from the law’s definition of “materially deceptive media,” which references AI.

Nearly all of the bills that cite “digital technologies” as a creation tool also cite at least one additional creation tool (such as AI or generative adversarial networks). The one exception is California, which references only “digital technologies” and defines materially deceptive media as “audio or visual media that is digitally created or modified, and that includes, but is not limited to, deepfakes and the output of chatbots, such that it would falsely appear to a reasonable person to be an authentic record of the content depicted in the media.”

None of the states that cite generative adversarial networks as a creation tool define the term within their legislation. Washington, Delaware, and Idaho also omit any definitions relating to the tools used to create manipulated media.

Forms of media

Most of the legislation defines manipulated media to include three primary forms of media: audio, image, and video. States such as Delaware explicitly list all three, defining synthetic media as “an image, an audio recording, or a video recording of an individual’s appearance, speech, or conduct...” Others take a broader approach, referring to “audio or visual” media without further specification. Indiana’s definition of manipulated media, for example, covers any “audio or visual recording of an individual's speech, appearance, or conduct that has been altered without the individual's consent…”

Idaho, Texas, and Wisconsin are the only three states that do not include all three primary forms of media within their definitions of manipulated media. Wisconsin and Idaho’s definitions of synthetic media only address video and audio content. Wisconsin refers to “audio or video content that is substantially produced …by artificial intelligence,” and Idaho refers to an “audio recording or a video recording of an individual's speech or conduct…” Texas limits its definition of covered media, stating it is “a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.”

Intent and impact

Thirteen states’ definitions of manipulated media say the content must be believable to a “reasonable person.” For example, Alabama states that the materially deceptive media must cause a “reasonable viewer or listener (to) incorrectly believe that the depicted individual engaged in the speech or conduct depicted.” Similarly, Minnesota says that the deepfake must be “so realistic that a reasonable person would believe it depicts speech or conduct of an individual who did not in fact engage in such speech or conduct.” New Mexico approaches this standard by stating that the person distributing the media “intends the distribution to result in altering the voting behavior of electors…by misleading (them) into believing that the depicted individual engaged in the speech or conduct depicted, and the distribution is reasonably likely to cause that (belief).”

Many of the states' definitions of manipulated media rely on a reasonableness standard. Colorado includes this standard in its definition of “AI-generated content,” which covers any media that is “substantially created or modified by generative artificial intelligence such that the use of generative artificial intelligence alters the meaning or significance that a reasonable person would take away from the content.” Delaware sets a similar qualification in its bill’s definition of “deepfake,” defined as synthetic media that “appears to a reasonable person to depict a real individual…doing something that did not actually occur in reality” or “…provides a reasonable person a fundamentally different understanding or impression of the appearance, action, or speech than a reasonable person would have from an unaltered, original version…”

Five states – New Hampshire, Utah, Wisconsin, Texas, and New York – omit the requirement that manipulated media be believable to a “reasonable person.” While these states do not address whether the media needs to be believable to the public, some still require that the creators of the media intend for it to be believable. Wisconsin, for example, states that the bill applies to any “audio or visual communication intended to influence voting that contains synthetic media” but does not specify whether this media needs to be believable to a “reasonable person.”

Regulations

State laws addressing manipulated media impose restrictions, such as requiring disclosures or regulating timing, but they do not completely ban the use of such media in electoral campaigns. The state laws mandate at least one of the following measures: requiring disclaimers on any content that is manipulated or artificially generated, restricting the use of manipulated or artificially generated content to a specific time window – often tied to the proximity of Election Day – or a combination of both. The only deviation from this structure is Idaho, which technically bans any use of manipulated media in campaign materials but provides an “affirmative defense for any action brought…[if] the electioneering communication containing synthetic media includes a disclosure stating, ‘This (video/audio) has been manipulated...’”

Disclosure requirements

Sixteen states require that the use of manipulated media in election campaign materials be disclosed to the viewer. There is some variance in how states require these disclaimers. Many states provide the specific verbiage that must be used. Delaware, for example, requires a label that says, “This (image/video/audio) has been altered or artificially generated.” New Hampshire’s legislation prohibits synthetic media and deepfakes from being used “within 90 days” of a candidate appearing on the ballot; the prohibition does not apply if there is a disclaimer stating, “This _____ has been manipulated or generated by artificial intelligence technology and depicts speech or conduct that did not occur.”

Other states are more general, specifying the message that the disclaimer must convey rather than the exact verbiage. For example, Washington’s legislation requires that the disclosure state that the media has been manipulated and adhere to specific visual and audio requirements, while Hawaii says that manipulated media must include a “disclaimer informing the viewer that the media has been manipulated by technical means and depicts appearance, speech, or conduct that did not occur.” Alabama, in contrast, provides a specific statement but clarifies that creators of manipulated media do not have to use that specific language to comply with the law. The bill states that the creator must include “a disclaimer in any presentation of the media informing the viewer both that the media has been manipulated by technical means and depicts speech or conduct that did not occur.”

Some states also have requirements for how the disclaimer must appear in the manipulated media. These typically pertain to the font size of the disclaimer, how long it appears on screen, and the speed at which it is delivered if the disclaimer is in audio form. For example, New Mexico’s legislation says that the “text of the disclaimer shall appear in a size that is easily readable,” and Alabama requires that the disclaimer be written in “letters in a size that is easily readable by the average viewer.”

Other states are more specific about the font size for disclaimers. California’s legislation says that for visual media, the disclosure must “appear in a size that is easily readable by the average viewer and no smaller than the largest font size of other text appearing in the visual media. If the visual media does not include any other text, the disclosure shall appear in a size that is easily readable by the average viewer.” Colorado’s size requirements are written nearly identically to California's, requiring a disclosure that “appears in a font size no smaller than the largest font size of other text appearing in the visual communication.” If there is no other text in the communication, the statement must appear in a “font size that is easily readable by the average viewer.”

Similarly, many states set requirements for how disclaimers must be presented in manipulated audio media. New Hampshire’s legislation requires that for manipulated audio, the disclosure must be “read in a clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than 2 minutes in length, interspersed within the audio at intervals of not greater than 2 minutes each.” Delaware’s requirement for manipulated audio media is written in the same way. New Mexico is less specific, but still states that disclaimers on manipulated audio need to be “clearly spoken during the advertisement.”

Restrictions on timing

Seven states (NH, AL, HI, MN, CO, CA, and DE) contain requirements, in addition to disclaimer specifications, for when unlabeled manipulated campaign material can be published. It is important to note that none of these laws prohibit the publication of manipulated media outright; rather, they allow unlabeled manipulated media to be published only outside a specified window before the election. All of the windows provided within the legislation are given as days from the election rather than actual calendar dates, except in Hawaii, which bans the publication of unlabeled media “between the first working day of February in every even-numbered year through the next general election.”

Two states that do not require disclosures – Minnesota and Texas – do have restrictions on when campaign ads, including manipulated media, can be published. Minnesota’s legislation makes it illegal for manipulated media relating to an election to be disseminated “within 90 days before an election” if it is made “without the consent of the depicted individual” and “with the intent to injure a candidate or influence the results of an election.” Texas’s requirements are simpler and only apply to a person who “creates a deepfake video” and “causes the deepfake video to be published or distributed within 30 days of an election.”

The other states prohibit the publication of the media within 30, 60, 90, or 120 days before the election in which the depicted candidate is running. Some states set different windows depending on whether it is a primary or general election. For example, New Mexico prohibits the dissemination of unlabeled manipulated media within “thirty days before the primary election or sixty days before the general election.” In comparison, Colorado prohibits deepfakes of candidates “sixty days before a primary election or ninety days before a general election.” New Hampshire and Alabama forbid publishing unlabeled manipulated material if the distribution occurs within “90 days” of an election. California prohibits posting unlabeled manipulated election media beginning “120 days before the election in California and through the day of the election” (or through the 60th day after the election in the case of content that depicts or pertains to elections officials).

Penalties

Penalties for violating these laws vary widely, and each state penalizes violations of its restrictions on publishing manipulated media around elections differently. Across the eighteen bills, penalties include injunctive relief, criminal charges (felony and misdemeanor), and civil penalties (i.e., fines). Injunctive relief (i.e., a court order that the manipulated election media be removed due to harm to the candidate) is the most common enforcement mechanism across passed legislation. Fourteen states (NM, NH, AL, IN, HI, UT, OR, WA, WI, MN, DE, ID, NY, and AZ) have included injunctive relief as an enforcement mechanism within their passed legislation.

Injunctive relief

States allow different parties to seek injunctive relief in response to the manipulated media. Many states allow the government (in the form of an attorney general, district attorney, or county attorney) or the party depicted in the manipulated campaign media to bring forward the injunction. For example, Hawaii allows a “cause of action for injunctive or other equitable relief” to be brought by the Attorney General, the campaign spending commission, a county attorney or county prosecutor, the depicted individual, a candidate for nomination or election to a public office who is injured or is likely to be injured by dissemination of materially deceptive media, or any organization that represents the interest of voters likely to be deceived by the distribution of materially deceptive media. New Hampshire, on the other hand, only allows injunctive relief to be sought by the “candidate or election official whose appearance, action, or speech is depicted through the use of a deceptive and fraudulent deepfake in violation.”

Criminal charges

Most of these states pair injunctive relief with one or more additional penalties. Some include the option to charge someone with either a misdemeanor or a felony, depending on the number of violations (e.g., NM, AL, IN, and HI). New Mexico’s legislation states that anyone who “violated the prohibitions provided…is guilty of a crime as follows: (1) for a first conviction, a misdemeanor; and (2) for a second conviction, a fourth-degree felony.” Alabama is similar, saying that a violation results in “a Class A misdemeanor, except that a second or subsequent conviction within five years is a Class D felony.”

Minnesota is the only state to include criminal felony charges alone (no misdemeanors) alongside the option for injunctive relief. The legislation states that a person convicted of violating the law may be sentenced “to imprisonment for not more than five years or payment of a fine of not more than $10,000, or both” if the violation is “within five years of one or more prior convictions under this section.” If the “person commits the violation with the intent to cause violence or bodily harm,” they may face “imprisonment for not more than one year or payment of a fine of not more than $3,000, or both.” In other cases, they may face “imprisonment for not more than 90 days or payment of a fine of not more than $1,000, or both.”

Civil penalties

Colorado is the only state allowing civil penalties without the option to file an injunction. The civil penalty amounts are tiered, starting at “at least one hundred dollars” or “at least ten percent of the amount paid or spent to advertise, promote, or attract attention to a communication,” depending upon the specific circumstances of the communication. Oregon and Utah do not appear to include any criminal charges in addition to injunctive relief, but both include the option to bring civil penalties. Oregon’s law states that, “upon proof of any violation of this section, the court shall impose a civil penalty of not more than $10,000.” Utah allows the courts to “impose a civil penalty not to exceed $1,000 against a person for each violation…that the court finds a person has committed.”

Some states, such as Washington and Idaho, allow for some flexibility in what sort of charges or legal remedies are available, depending on the context of the incident and other factors. Idaho and Washington both include the stipulation that the provisions of these penalties do not “limit or preclude a plaintiff from securing or recovering any other available remedy.”

This post is part of a series examining US state tech policy issues in the year ahead.

Authors

CJ Larkin
CJ Larkin is an MPP student and Tech and Public Policy Scholar at Georgetown University. Previously, CJ spent two years as a Govern for America Fellow working on broadband and technology ethics.
