New Research Highlights X's Failures in Removing Non-Consensual Intimate Media

Prithvi Iyer / Oct 11, 2024

X owner Elon Musk, May 2023. Shutterstock

In a new paper, "Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes," a group of researchers set out to audit the amount of time it takes X (formerly Twitter) to take down non-consensual intimate media (NCIM), which is defined in the paper as “revenge pornography” and sexualized or nude “deepfakes.” The study looked at two mechanisms under which users can seek content takedowns: copyright infringement claims under the Digital Millennium Copyright Act (DMCA) and X’s “non-consensual nudity” policy. The study found that “[t]he copyright condition resulted in successful image removal within 25 hours for all images (100% removal rate), while non-consensual nudity reports resulted in no image removal for over three weeks (0% removal rate).”

“The results of this really important research are both predictable and disappointing,” noted Dr. Mary Anne Franks, President and Legislative & Tech Policy Director at the Cyber Civil Rights Initiative (CCRI). Dr. Franks’ colleague Asia Eaton, CCRI’s Head of Research, is one of the study’s coauthors.

One in 8 US adults has had their intimate images shared without their consent, and survivors of such abuse report experiencing mental distress and social withdrawal. Generative AI tools have compounded the problem by making it easier to create NCIM at scale. In some instances, the circulation of this material has led to victims taking their own lives.

The result is perhaps unsurprising given the policy context in which platforms handle copyright claims. The DMCA, passed in 1998, is a well-established law that requires online platforms, including social media services, to remove infringing content when notified of a copyright claim. In exchange for cooperating with requests from rights holders, the DMCA grants online platforms a safe harbor that shields them from liability when users post content that infringes an individual’s or company’s copyright.

Internet platforms have established robust notice-and-takedown policies and systems to comply with the law. In contrast, there is no federal statute requiring the removal of NCIM, though every state except South Carolina has passed a law prohibiting the distribution of non-consensual intimate images. But that doesn’t make it easy to demand such material be removed from a social media platform like X. “In essence, property rights matter, privacy rights don’t. Or rather, women’s privacy rights don’t matter unless they can be framed as property rights,” said Dr. Franks.

The authors of the paper say the current legal terrain makes it difficult for victims to seek takedowns:

Although 4[9] U.S. states and Washington D.C. have enacted legislation addressing NCIM, these laws have significant limitations that hinder justice for victim-survivors. First, legislation is jurisdiction-specific, complicating the prosecution of online crimes when the perpetrator resides in another state or country. Second, legislation primarily focuses on punitive actions for the perpetrator (when able to be identified), and offers little recourse for removing harmful content from online platforms—a critical need for victim-survivors. In other words, while legal action under current state laws have made progress in holding perpetrators accountable, they have failed to support victim-survivors with legislation that empowers them to remove NCIM depicting them from the internet.

To audit X’s reporting mechanisms, the researchers created 50 AI-generated nude images of fictional personas and posted them using ten “poster” accounts that were created within “a three-day period in 2024 to ensure they were similar in account age.” These images were reported under two conditions: half using X’s non-consensual nudity reporting mechanism and the other half using the DMCA process. The researchers then measured how quickly X responded to the reports and removed the images over a 21-day period. They also waited a “sufficiently long time between posting and reporting, to allow X to potentially detect or remove this content without manual reporting.”

To assess the efficacy of each reporting method, the researchers used three metrics (a purely illustrative sketch of how such metrics might be computed appears after the list):

  1. Whether the content was removed within three weeks after the last report was made.
  2. The number of hours it took for the content to be removed after the initial report.
  3. The number of views and engagement the content received. This is especially important because more content exposure could increase the likelihood that the content would be seen by someone in the victim’s social circle.
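The paper does not include analysis code, but as a purely hypothetical illustration, the sketch below shows how the first two metrics (plus total views) might be computed in Python from a log of report and removal timestamps. Every field name and value here is invented for the example, not drawn from the study.

```python
from datetime import datetime, timedelta

# Hypothetical report log: one record per reported image.
# All fields and values are invented for illustration only.
reports = [
    {"condition": "dmca", "first_report": datetime(2024, 5, 1, 9, 0),
     "last_report": datetime(2024, 5, 1, 9, 0),
     "removed_at": datetime(2024, 5, 2, 8, 0), "views": 120},
    {"condition": "nc_nudity", "first_report": datetime(2024, 5, 1, 9, 0),
     "last_report": datetime(2024, 5, 1, 9, 0),
     "removed_at": None, "views": 310},
]

WINDOW = timedelta(weeks=3)  # three-week observation window described above

def summarize(records, condition):
    subset = [r for r in records if r["condition"] == condition]
    # Metric 1: share of images removed within three weeks of the last report
    removed = [r for r in subset
               if r["removed_at"] is not None
               and r["removed_at"] - r["last_report"] <= WINDOW]
    removal_rate = len(removed) / len(subset)
    # Metric 2: hours from the initial report to removal (removed images only)
    hours_to_removal = [
        (r["removed_at"] - r["first_report"]).total_seconds() / 3600
        for r in removed
    ]
    # Metric 3: total views accumulated by the reported posts
    total_views = sum(r["views"] for r in subset)
    return removal_rate, hours_to_removal, total_views

for condition in ("dmca", "nc_nudity"):
    rate, hours, views = summarize(reports, condition)
    print(condition, f"removal rate={rate:.0%}",
          f"hours to removal={hours}", f"views={views}")
```

On this invented toy data, the DMCA condition shows a 100% removal rate within 23 hours while the non-consensual nudity condition shows 0%, mirroring the pattern the study reports at scale.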

Ultimately, “images reported for copyright infringement under the DMCA were removed within a day, while identical images reported under X’s privacy policy remained on the platform for over three weeks.” To get platforms to take down NCIM more expeditiously, the researchers argue that state and federal laws need to make platforms liable for failing to remove such content in a timely manner: “protecting intimate privacy requires a shift from reliance on platform goodwill to enforceable legal standards.”

Dr. Franks has a clear idea of what those standards should look like, which she shared in a comment to Tech Policy Press:

The most significant problem with the majority of current state laws, in my view, is that they misdefine the offense as a form of harassment rather than a violation of privacy. Under those laws, perpetrators who don’t act with the motive of causing harm to the victim (for example, those who disclose the imagery to make money, or to gain social status, or to provide ‘entertainment’) can’t be prosecuted. That’s nearly 80% of self-acknowledged perpetrators, according to CCRI’s research. The study’s authors emphasize that the laws don’t empower victims to remove the imagery, which is true, but that has much more to do with restraints imposed by the interpretation of other laws, namely Section 230 and the First Amendment, than deficiencies in these laws themselves.

According to Dr. Franks, getting a platform like X to take this issue seriously requires two changes to the current legal framework. First, Congress needs to pass a federal law like the Stopping Harmful Image Exploitation and Limiting Distribution (SHIELD) Act, which passed in the Senate but has yet to advance in the House. Second, Congress must reform Section 230, which provides broad liability protections for tech companies. Until then, expect results like those revealed in this research to persist.
