Taylor Swift Deepfakes Show What’s Coming Next In Gender and Tech – And Advocates Should Be Concerned
Ariana Aboulafia, Belle Torek / Jan 26, 2024

Ariana Aboulafia and Belle Torek are former legal fellows to the Cyber Civil Rights Initiative. This piece is submitted in their personal capacity and does not reflect the views of their current employers.
For many young women, to watch Taylor Swift is to see someone living out our wildest dreams.
She is a pop superstar, to the point that she may have added $5 billion to the world economy. When faced with injustice, whether at the hands of an ultra-wealthy record executive, or a radio personality turned assailant, or (yes) even one of her exes, Swift has fought back – and, in doing so, has shown her fans how to navigate their way through a world that is often critical of, unkind to, and even unsafe for women.
Perhaps most impressively, Swift has done all of this while remaining relatable – for her fans, the Swifties, the vulnerability of her earnest songwriting has allowed us to recognize her experiences as if they were our own. Her lyrics have become embedded in our lexicon, as we have used Fearless to make sense of schoolyard bullies; Red to process our first real heartbreaks; and Lover to see the injustice that comes with existing as a young woman in the workplace (no matter the industry).
This week, though, Swifties watched in horror as Taylor Swift lived through a nightmare, becoming a victim of nonconsensual, AI-generated deepfake pornography. This, too, is unfortunately relatable – much more so than it may initially appear.
Image-based sexual abuse is easier and more prevalent than ever
Nonconsensual distribution of intimate imagery (NDII, also sometimes termed ‘nonconsensual pornography,’ or colloquially referred to as ‘revenge porn’) is the act of sharing sexually explicit imagery, typically photos or videos, without the subject’s consent. While NDII has historically involved the sharing and/or distribution of authentic sexually explicit content, nowadays this content can also be AI-generated – a deepfake. Generative AI tools can produce sexually explicit deepfake content using nothing more than an image of a victim’s face. Reality Defender, an AI detection company, and a report from 404 Media found that this is likely what happened to Taylor Swift, indicating that the images may have been made in part using a free text-to-image AI generator.
These particular images, which depict Swift nude and in sexual scenarios at a Kansas City Chiefs game, circulated quickly throughout social media and received millions of views, although it was not immediately clear who made them, or even who first posted them. To a certain extent, these photos are Swift-specific, in that they were clearly created as misogynistic backlash to Swift’s presence this season at Chiefs games (which has angered many men – or, as Taylor has referred to them, “dads, Brads, and Chads”). But, the creation and dissemination of deepfake pornography is far from an issue that solely impacts celebrities.
During our time as legal fellows to the Cyber Civil Rights Initiative, the leading organization focused on combating image-based sexual abuse and other gender-based harms related to technology, we saw firsthand the ways in which AI-generated deepfakes, particularly those depicting nonconsensual pornography or other sexually explicit images, harmed women and other marginalized groups – even years before the generative AI boom of late 2022.
However, since the advent of generative artificial intelligence and its subsequent availability to the public, deepfakes and other forms of AI-generated content have gone on to touch nearly every aspect of public and private life. And, while conversation surrounding deceptive deepfake technology and its resulting harms often arises in the context of the 2024 presidential election, election-related deepfakes are nowhere near reaching the prevalence of pornographic deepfakes. To date, estimates still indicate that between 96 and 98 percent of all deepfake videos online are pornographic, and that an estimated 99 percent of the victims and survivors targeted by deepfake pornography are women.
The sophistication of contemporary deepfake technology may lead one to believe that crafting such content demands abundant time, extensive resources, and technical expertise. Yet, the opposite is true: creating AI-generated deepfakes has become remarkably quick, accessible, and inexpensive. This is reflected in the rate at which deepfake pornographic content proliferates online: according to one study, the total number of deepfake videos online in 2023 represented a 550% increase over the 2019 figure. Further, the same study estimated that one in three deepfake tools allows its users to create deepfake pornography, and that it takes under half an hour to create a 60-second deepfake pornographic video of anyone with just one clear image. And, it is becoming increasingly difficult to distinguish authentic content from its AI-generated counterpart with the naked eye alone – this will only get worse as generative AI improves.
While legislators, companies, and even some advocates may wish to believe that any tangible harms of AI are firmly tomorrow’s problems, the fact is that these harms are nothing new – and that they are here, now, and impacting people’s lives. Here, not for the first time, Taylor Swift can show us what that impact might look like.
Of course, Taylor Swift’s experience with deepfakes is extraordinary because, well, she’s Taylor Swift. In fact, this isn’t even the first time this month that Swift has been the unwilling subject of a deepfake AI controversy. Two weeks ago, a video featuring an AI-generated version of Swift’s voice and likeness, which showed her supposedly promising to give away 3,000 Le Creuset sets to Swifties, went viral, and may have left some Swifties sad and scammed. This deceptive marketing tactic likely catapulted to virality for two reasons: it capitalized on Swift’s celebrity status to attract one of the world’s most dedicated fan bases, and it ingeniously combined elements of reality (Swift’s well-known appreciation for Le Creuset products) with AI-generated deception. And, while this is clearly unethical (and potentially illegal), it appears that Swift largely chose to shake it off rather than publicly comment – this incident did not produce anywhere near the same level of response, from either the Swifties or Swift herself, as the deepfake pornographic images did.
Indeed, within moments of the release of those images, hundreds of fans shared their outrage on Swift’s behalf, with “Protect Taylor Swift” almost immediately trending on X (formerly Twitter). The photos initially evaded whatever content moderation systems remain in place on X under Elon Musk’s leadership, and remained on the site for approximately 17 hours. Many of the photos were eventually removed, largely as a result of a mass-reporting campaign led by Swifties.
The law offers most women little recourse
Swift herself is reportedly “furious” over the images, and has stated that she is considering legal action. If Swift does want to take legal action, it’s fair to say that she has sufficient resources at her disposal to pursue whatever recovery is possible. With no federal law currently in place prohibiting the creation or dissemination of deepfake pornography, legal recourse may be difficult – but renowned attorney and subject-matter expert Carrie Goldberg has posted potential arguments on X, and these sorts of legal theories would certainly be available to Swift’s counsel if they chose to use them. But, even if legal redress fails, the fact remains that Swift has also spent many years working with a mastermind of public relations to meticulously craft a public persona, one into which these photos do not necessarily fit.
As a result of all of these factors – her army of devoted fans, her ability to afford a top-notch legal team, and her relationships with the most skilled PR reps in the business – it is unlikely that these images will actually wind up causing Swift any significant damage to her (big) reputation.
But, what about the rest of us?
That is, if one of the most powerful women in the world cannot stop these sorts of abuses from occurring in the first place, who can? Perhaps more pertinently, though – while Taylor Swift’s billionaire status, access to resources, and overall privilege as a white and cisgender woman may shield her from the worst harms caused by nonconsensual deepfake pornography and other AI-generated abuses, what will happen to those of us who do not share those attributes? We know all too well that women who are queer, transgender, low-income, disabled, or people of color (or some combination thereof) are having their very lives and livelihoods destroyed because of this technology and its unchecked proliferation, which occurs as a result of failed content moderation policies and minimal regulation. It is unrealistic to expect them simply to tolerate it.
As Dr. Mary Anne Franks, President and Legislative & Tech Policy Director of the Cyber Civil Rights Initiative, said to us via text message, “No matter how rich, powerful, or beloved, no woman is immune from the destructive and dehumanizing effects of misogyny. Deepfake porn is an assault on women’s autonomy and expression, and should be unacceptable in any society that claims to value either.”
Of course, nonconsensual pornography is far from the only harm that women experience on a daily basis. It must also be considered in context with all of the other harms that women regularly experience, with little help from law or regulation. Here, again, looking at Swift as an example is illuminating – on the very same day that these graphic images went viral on X, Swift’s alleged stalker was arrested for the third time in one week. Again, if Taylor Swift, who has access to bodyguards and security, is experiencing this frustrating lack of protection from our legal system, it is vital to consider the experience of an average woman working within these structures. When it comes to curbing gender-based harms – including stalking (which often takes technological form, as cyberstalking) and especially AI-generated abuses like deepfake pornography – legislators, regulators, and companies may as well be telling women: you’re on your own, kid. But, it does not have to be this way.
Swifties turn to tech policy?
While protections for victims and survivors of stalking likely will not improve anytime soon (partially thanks to a recent Supreme Court decision), we are at a uniquely ripe time for regulation of AI – and again, here, Taylor (and the Swifties) could lead the way. While celebrities have been victims and survivors of nonconsensual deepfake pornography in the past, none of those images has had so extensive a reach, or drawn so vehement a response from fans.
This incident could both raise awareness of the harms of these technologies and become a catalyst for regulators to speak now and take action to protect women, girls, and other marginalized groups from those harms. While the Biden-Harris Administration’s executive order on AI has set into motion the development of guidance for content authentication and the clear watermarking of AI-generated content, many advocates have stressed that these schemes are “unlikely to work” on their own to meaningfully mitigate the sorts of harms that can ensue when deceptive AI content proliferates online.
Furthermore, Swifties – Taylor’s fearless and staunchly loyal fanbase – can play a role in pushing for regulation of these technologies, and of others that harm marginalized groups. Their power and ability to organize (particularly in digital spaces) should not be ignored or underestimated; this is, after all, the very same fanbase that sued Ticketmaster after many of them were unable to secure tickets to the Eras tour. Some X users are already predicting that Taylor Swift and her fans could be what finally leads to “heavy regulation of AI,” and expressing gratitude for it.
Long story short, the existing industry and regulatory system is insufficient to protect people from the harms of these technologies – and these problems are only going to get worse, as algorithmic systems become more advanced. But, with these sorts of problems come opportunities for progress, as well. The main question that remains: are we ready for it?