Swift Justice? Assessing Taylor's Legal Options in Wake of AI-Generated Images
Kevin Frazier / Feb 27, 2024

Kevin Frazier, an Assistant Professor at St. Thomas University College of Law and Senior Research Affiliate with the Institute for Law and AI, spoke to various experts about what recourse exists for victims of nonconsensual intimate imagery.
As made clear by the recent dissemination of sexually explicit AI-altered images of Taylor Swift, generative AI tools can be used in patently destructive ways. As reported by The Verge, one such synthetic image of Swift "attracted more than 45 million views, 24,000 reports, and hundreds of thousands of likes and bookmarks" on X. Seventeen hours passed before X suspended the verified user who initially shared the image. Given Swift's popularity, it comes as no surprise that this episode has (1) validated the concerns of experts, (2) increased awareness of the anti-social combination of poor content moderation and terrible uses of generative AI tools, (3) amplified calls for legislative action, and (4) prompted calls to lawyers asking whether Swift can pursue legal action against platforms that failed to take timely action against the harmful and false content.
This article adds to this ongoing conversation by providing expert analysis from law professors who were studying privacy and speech law well before the current age of AI. Via several email exchanges, I asked these scholars to dive into the causes of action available to Swift and other victims, as well as the legal options available to the US government.
Copyright Claims
First, though Swift’s odds of successfully holding users and platforms accountable seem slim, her case may offer the best chance to lay the foundation for future successful litigation. The scholars listed a litany of potential claims available to Swift under both state and federal law. More specifically, Michael Goodyear, Acting Assistant Professor of Lawyering at NYU Law, identified several potentially viable claims. He first analyzed her potential copyright claims:
Copyright law is unlikely to be helpful here because although the AI systems that generated the deepfakes were likely trained on photos of Swift, she probably does not own the copyrights in them.
Wayne Unger, Assistant Professor of Law at Quinnipiac University School of Law, described the alternative scenario:
[Swift] may hold the copyrights to images and any other “training data” that the developers used to train the AI that produced the fake images. Assuming she does hold the copyrights to any “training data” that the developers used, then this would be an improper use of copyrighted material, and perhaps, using copyrighted material for commercial purposes.
Goodyear gave some background as to why it may be more likely that Swift lacks such copyrights and, therefore, will probably not have a meritorious copyright case:
This has been an issue before with celebrity photos circulating online, where the photographer, not the celebrity, owns the copyright and therefore has the power to restrict the photos’ dissemination (or not). In fact, in cases involving celebrities such as Emily Ratajkowski and Bella Hadid, the celebrities were sued for using photos of them[selves] without the photographers’ permission.
Still, he noted that even if Swift lacks the copyrights, she may have some means of recourse:
But ownership of copyrights in the photos or collaboration with the owners could allow Swift to submit takedown notices to remove infringing deepfakes developed from those photos. Platforms may be incentivized to remove the deepfakes in response, as is required to maintain their safe harbor from infringement liability under the Digital Millennium Copyright Act (“DMCA”). There may be no infringement; there are currently several lawsuits pending across the country on whether training a generative AI system on a copyrighted image is fair use or not. Under the DMCA, platforms do not have the freedom to make determinations about what is infringing or not; once content is reported as infringing by the rights owner, the platform must remove it or lose this safe harbor and risk being held liable if there is infringement.
Eric Goldman, Associate Professor of Law and Director of the High Tech Law Institute at Santa Clara University School of Law, agreed that Swift’s copyright claims likely would run aground on the assumption that she is not the copyright owner of the image in question.
Goldman also challenged the idea of referring to these images as “deepfakes.” He noted that he generally refrains from using the term because of its imprecision. In Goldman's opinion, the term "commingles a wide range of potential factual and legal scenarios." In other words, whether an image qualifies as "inauthentic content that the viewer would be inclined to believe is authentic" or as inauthentic content unlikely to deceive viewers may be outcome-determinative in a legal setting. Goldman's hunch was that the Swift images fell into the latter bucket.
The challenge of precisely labeling this sort of problematic content has also complicated efforts to reduce the spread of nonconsensual porn or revenge porn. Goodyear's analysis of state laws on the topic, for instance, revealed that some states, such as New York, have had to consider amending their revenge porn laws to explicitly refer to and ban deepfakes.
Privacy-related Claims
Goodyear, Unger, and Goldman all flagged claims arising from invasions of Swift’s privacy as a possible basis for legal recourse.
The most compelling claim came from Goodyear, who assessed Swift’s right of publicity claims:
At present, rights of publicity are state law, so they vary somewhat depending on the states. Swift owns several (multimillion dollar) properties in Nashville, NYC, LA, and Rhode Island. If we assume Swift would bring her claim under Tennessee law, she has a right over her name, photograph, or likeness, but only so far as it is used for a commercial purpose. So Swift may be able to successfully allege misappropriation of her right of publicity if the deepfakes are used in commerce (e.g., for advertising purposes, or as commercial products on adult content websites). But more likely the deepfakes are being circulated without a commercial purpose.
More generally, Meredith Rose of Public Knowledge pointed out that many state right-of-publicity frameworks have significant limitations that render them unhelpful to residents facing situations similar to Swift’s. For instance, Rose flagged that many such laws only apply to the likenesses of deceased individuals.
Even if Swift were to prevail on such a claim, Goodyear questioned whether platforms would then have a duty to remove the content in question:
It’s also unclear what obligations platforms would have to remove deepfakes under right of publicity claims. There is a split between courts over whether platforms are immune from any right of publicity claims based on user content under Section 230. If Section 230 does not apply to right of publicity claims, platforms could face secondary misappropriation claims for leaving commercial deepfakes up.
Goldman had a less bullish outlook on privacy-related claims, concluding that “if the image is inauthentic, then most privacy violations don't apply because the image isn't true.” He also listed intentional infliction of emotional distress as a potential cause of action. “Though this claim is generally hard to win,” Goldman pointed out, “inauthentic sexual content may be a paradigmatic example of this doctrine.”
This is just a fraction of the privacy claims available to Swift, according to these scholars. Goldman, for instance, briefly analyzed the merits of a defamation claim. He argued that this would only have legal legs “if viewers believed the content was true.” His scan of the case law, though, indicates that “[c]ourts are increasingly skeptical that online readers/viewers actually believe what they see online, and that has made it harder for plaintiffs to win defamation cases.”
Goldman and Goodyear also highlighted state laws related to the dissemination of "nonconsensual pornography.” Those laws vary substantially from state to state, as Goldman noted, and, according to Goodyear, are in many cases cabined to a narrow set of content, such as photographs or recordings of an individual’s intimate parts. Nevertheless, amendments to these state laws and pending federal proposals may soon make this a more viable cause of action for those in situations similar to Swift’s. Here’s what Goodyear had to say:
Some states, including New York (another of Swift’s homes is in NYC) have started to amend their revenge porn laws to explicitly ban deepfakes. Congress is also considering the NO FAKES Act, which would grant individuals an exclusive right over digital replicas of them—although because the right is licensable, its efficacy could be seriously undermined. At present, platforms would likely not be liable for hosting unlawful revenge porn images, as they would be protected by Section 230. The NO FAKES Act seeks to counter this by holding platforms liable for violating one’s digital right if they know that the digital replica is being used without permission.
In summary, many of these areas of law have been developing for centuries; as novel as generative AI tools may be, from a legal perspective many of these claims are anything but new. Copyright claims captivated the nation as early as 1865, as indicated by a dispute over the play Our American Cousin. Likewise, celebrities have been suing over unauthorized uses of their name and likeness since at least 1953. When I asked Goldman whether any legislative proposals could meaningfully afford individuals like Swift better odds of holding creators and hosts accountable for sharing synthetic content, he responded that “so many existing laws already apply, it's not always clear if there's a major legal gap or if any gap can be corrected in a manner consistent with the First Amendment.”
Still, when I nudged the scholars to assess whether Swift should pursue legal action, Unger and Goodyear thought that Swift’s profile and resources might give her a better chance than most. Unger, qualifying that he did not know all the facts at issue, offered this overview:
This may be the best opportunity to bring the case because there are several aspects to the case that are indisputable. For instance, does Swift have a likeness that can be tied to non-consensual commercial activity? Yes. Does she own the copyrights to much of her media (e.g., pictures and films) that may have been used to train the AI? Likely, yes.
Practically, this is also a good opportunity because Taylor Swift may be the strongest advocate for these legal issues. First, she can afford the litigation, including top lawyers in this subject matter. Second, she has exhibited a willingness to spend money on issues that she cares about regardless of whether she’ll collect any damages. Third, her public profile will elevate the issue, which in turn, may generate the legislative interest, such as Congressional hearings.
Goodyear echoed much of Unger’s sentiment, writing that Swift “is in a unique position to call attention to her victimization and has the financial capacity to support any proceedings,” but cautioned that:
...there are many reasons she may choose not to do that: she may not win in court (depending on the legal theories), she may not want to spend the time and hassle even if she could win, she may not be satisfied with the outcomes even if she wins, she faces the risks of the Streisand Effect, and more.
Swift’s Potential to Mobilize Legislative Responses
Beyond the courts, Swift may also have a chance to spur legislative responses to such content. Melissa Heikkilä of MIT Technology Review was an especially vocal supporter of Swift taking on an advocacy role. Heikkilä penned an open letter to Swift asking her “to be furious” about the spread of “deepfake porn.” Importantly, Heikkilä did not explicitly urge Swift to seek legal recourse but to instead use her platform to bring about regulatory responses. She anticipated that Swift’s advocacy would fall on receptive ears given the increasing attention to these issues by legislators, a conclusion that Ariana Aboulafia and Belle Torek also arrived at in Tech Policy Press.
Swift’s saga may have already spurred more legislative attention. As noted by Adi Robertson of The Verge, a bipartisan group of senators introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (“DEFIANCE”) Act just a few days after they caught wind of the use of generative AI tools against Swift. The bill would provide individuals like Swift with a civil cause of action against those who knowingly produced or possessed such images with an intent to spread them.
Notably, the DEFIANCE Act adds to a pre-existing backlog of related legislative proposals that may soon garner even more support on Capitol Hill. Sarita Schoenebeck, Associate Professor of Information at the University of Michigan’s School of Information, flagged the DEEPFAKES Accountability Act, put forward in the House by Rep. Yvette Clarke (D-NY), and the Preventing Deepfakes of Intimate Images Act, put forward by Rep. Joe Morelle (D-NY), as other pieces of legislation worth tracking. Schoenebeck noted that both of these bills specify when deepfakes may qualify as nonconsensual sexual content and provide victims certain causes of action. Legislators and advocates, though, should avoid acting too hastily, warned Schoenebeck. She cautioned that legislation that applies to generative AI generally may risk stifling beneficial uses of such tools.
On the whole, scholars seemed optimistic that initial bipartisan support for these bills could lead to legislative action sooner rather than later. And while such action is important, Schoenebeck pointed out that legislation is only the first step toward reducing the spread of such content; the second and, in Schoenebeck’s estimation, more difficult step is enforcement. Here’s her brief description of potential enforcement challenges:
The scale of deepfake sexual content is large and presumably increasing. It’s not clear how platforms will be able to respond to a large increase in takedown requests if required to, or how effectively the legal system will be able to handle criminal cases. We already know that in offline contexts, cases related to sexual harms can be slow, traumatizing, and often ineffective for victims. [For effective enforcement], we'd need an increase in resources to address digital cases of sexual harm on a large scale.
These enforcement challenges make clear that advocates for legislation should push legislators on their plans to monitor the implementation of any regulation. Moreover, they suggest that advocates should urge Congress to include a diverse set of stakeholders in monitoring and evaluating enforcement to determine if and when amendments may be justified.
Criminal Prosecution by the Government
Goldman and Unger raised the possibility of the government criminally prosecuting the creators and disseminators of synthetic content, specifically revenge porn. As summarized by the National Association of Attorneys General, “48 states plus the District of Columbia and Guam have criminalized revenge porn.”
Unger stressed that such prosecution would face stiff headwinds from the First Amendment:
If government moves to prohibit the production of generative AI works, the producers/developers may claim they have a First Amendment right to produce this “art,” and I predict the developers will prevail. However, courts may consider the Swift images as obscene, which would limit the First Amendment protections under current case law. Of course, that wouldn’t apply to all generative AI deepfakes—just those that breach the line into obscenity.
Goldman agreed that the First Amendment would pose a hurdle. Taking the perspective of the US Department of Justice, he added that the government would need to “carefully consider the prerequisites of any crime, as well as the First Amendment overlay, in assessing whether a criminal investigation makes sense for their office.” All in all, after a quick review of a limited set of facts, Goldman declared, “[I]t's not clear to me that there are any ‘slam dunk’ options available to them.”
Means to Shape Platform and Lab Behavior
Swift’s experience has also sparked analysis of the voluntary means available to platforms and labs to stop or slow the spread of synthetic content that violates their respective content moderation standards, the law, or both. OpenAI’s recent promulgation of rules related to the use of its tools in an electoral context indicates that labs can, if they opt to, take steps that will theoretically reduce the likelihood and severity of harms caused by AI-generated content. In practice, researchers question whether content labels disclosing disinformation or content provenance alter user behavior.
Those limits aside, according to the Royal Society, “Digital content provenance is an imperfect and limited – yet still critically important – solution to the challenge of AI-generated misinformation.” MIT Technology Review’s Heikkilä was even blunter in her assessment of technical solutions, pointing out that “there is no neat technical fix” for the creation and spread of “nonconsensual deepfake porn” and lamenting that even if such technical solutions did exist, “[t]he tech sector has thus far been unwilling or [un]motivated to make [such] changes[.]”
From my read of the empirical research and scan of the legal landscape, rather than merely attempting to remove or deprioritize content that violates the law or content moderation standards, platforms and labs should stimulate the creation and spread of verifiable and reliable information. This approach would align with the idea of a “Right to Reality”: making sure the public has easy access to the information required to exercise their personal autonomy. Notably, OpenAI took a step in this direction by prompting users who ask ChatGPT for election-related information to go to CanIVote.org. Presumably, labs and platforms could expand these efforts through changes to their platform designs and by identifying more reliable sources.
Notwithstanding my own views on the limits of watermarking and content labeling, I asked Goldman to analyze the merits of some sort of legislative safe harbor for labs that watermark their content with the aim of reducing the spread of Swift-esque content or easing legal action against those who share such content.
Goldman questioned the viability of this effort. His doubts were grounded in technical limitations more so than legal concerns: “I think any watermarking law would be worth evaluating only after the technology is reasonably mature. I don't think we're there yet.”
If watermarking and provenance approaches fail or fall short of the expectations of platforms, users, and regulators, Schoenebeck detailed another step available to platforms: making it harder to share such content in the first place by adding friction to posting. The current design of many platforms gives “much of the control of content posted online ... to the person who posts it rather than the person who is in it,” wrote Schoenebeck. Platforms could reallocate some of this power by using content provenance tools to identify individuals in a post and provide those individuals with a chance to consent to the post. Schoenebeck acknowledged that this would lead to a trade-off: diminishing the rate of content production but increasing the likelihood that posts comply with a platform’s standards and the law. Given the proliferation of deepfakes, Schoenebeck argued this compromise warrants consideration.
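To make the shape of that design concrete, here is a minimal sketch in Python of what such a “consent gate” might look like. To be clear, this is my own illustration of the workflow Schoenebeck describes, not any platform’s actual system; every name in it (identify_depicted, submit_post, and so on) is hypothetical, and the identification step is stubbed out where a real provenance or likeness-matching service would sit.

```python
# Hypothetical sketch of a "consent gate" for posting: identify individuals
# depicted in a post and hold it until they consent. All names here are
# illustrative assumptions, not a real platform API.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    PENDING_CONSENT = "pending_consent"
    PUBLISHED = "published"
    REJECTED = "rejected"


@dataclass
class Post:
    author: str
    media: bytes
    status: Status = Status.PENDING_CONSENT
    depicted: list[str] = field(default_factory=list)


def identify_depicted(media: bytes) -> list[str]:
    """Stub for a provenance/identity check (e.g., matching media against
    registered likenesses). A real system would be far more involved."""
    return ["example_person"]  # placeholder result


def submit_post(post: Post, consents: dict[str, bool]) -> Post:
    """Hold the post until every depicted individual has consented.

    `consents` maps user IDs to recorded consent decisions; the absence of
    a decision keeps the post pending, shifting control toward the person
    depicted rather than the person posting.
    """
    post.depicted = identify_depicted(post.media)
    decisions = [consents.get(person) for person in post.depicted]
    if not post.depicted or all(d is True for d in decisions):
        post.status = Status.PUBLISHED
    elif any(d is False for d in decisions):
        post.status = Status.REJECTED  # an explicit refusal blocks the post
    else:
        post.status = Status.PENDING_CONSENT  # awaiting a decision
    return post


if __name__ == "__main__":
    draft = Post(author="some_user", media=b"...")
    print(submit_post(draft, consents={}).status)                        # pending
    print(submit_post(draft, consents={"example_person": True}).status)  # published
```

The trade-off Schoenebeck flags is visible in the sketch: defaulting to PENDING_CONSENT slows publication for every post that depicts an identifiable person, which is precisely the friction she suggests platforms might accept in exchange for better compliance.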
In the wake of the legislative efforts described above and a recent berating of platform leaders during a congressional hearing, these sorts of voluntary efforts to shift platform norms and alter platform designs may be increasingly appealing to Facebook, X, and the like.
Summary
Ultimately, Swift likely does not have viable legal claims under current law. Still, if anyone could fund and persevere through litigation that results in common law adjustments in favor of victims, it may be her. Swift also may be in the best position to rally Congress to act on one of the many proposals for a more comprehensive response to such content.
That said, given that courts may not serve as a source of legal recourse and Congress has a poor track record of responding to issues posed by emerging technologies, advocates for some sort of justice in Swift’s case (and the many others that have emerged and will emerge) may want to prioritize legislative action at the state level. In the same way state legislatures have led efforts to update privacy laws in the 21st century, they may be the best venue for adjusting laws to confront novel concerns raised in the Age of AI. Platforms may also be a promising target of advocates’ fury: by altering their designs, they can afford targets of deepfakes more control over how, when, and by whom their likenesses are shared on social media.