Poll: Vast Majority of US Voters Agree Individuals and Platforms Should be Held Accountable for Sexually Explicit Digital Forgeries

Kaylee Williams / Sep 16, 2024

On Thursday, Sept. 12, the White House announced that several leading AI companies — including Adobe, Anthropic, Microsoft, and OpenAI — have agreed to a series of voluntary commitments to implement safety measures intended to slow the spread of AI-generated image-based sexual abuse (IBSA).

“Image-based sexual abuse—both non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM), including AI-generated images—has skyrocketed,” the announcement reads, “disproportionately targeting women, children, and LGBTQI+ people, and emerging as one of the fastest growing harmful uses of AI to date.”

The voluntary commitments include “incorporating feedback loops and iterative stress-testing strategies…to guard against AI models outputting image-based sexual abuse,” and a promise to remove nude images from AI training datasets “when appropriate and depending on the purpose of the model.”

While the Biden administration lauded these commitments as “a step forward across industry to reduce the risk that AI tools will generate abusive images,” the results of a new, nationally representative Tech Policy Press/YouGov survey of registered voters in the US, completed just one day after the White House’s announcement, suggest that these efforts may not go far enough to meet the public’s demands for a safe and NCII-free internet. Tech Policy Press commissioned YouGov to conduct the poll of 1,136 voters, fielded from September 11 to September 13, 2024.

When asked whether social media platforms should be required to “immediately remove nonconsensual AI-generated intimate imagery once it is reported,” more than three quarters of respondents (77 percent) answered in the affirmative. Just 19 percent of respondents answered, “I’m not sure,” and only 5 percent replied, “No.”

The proportion of “Yes” responses was even higher when the phrase “nonconsensual AI-generated intimate imagery” was replaced with the Cyber Civil Rights Initiative’s recommended legal term, “sexually explicit digital forgeries.” Of the respondents who received a version of the question including this specific term, 87 percent answered “Yes,” while 9 percent responded with “I’m not sure,” and less than 5 percent responded, “No.”

A similar, although slightly less stark, pattern can be seen in replies to the question, “Do you think digital platforms (such as social media or websites) should be held legally accountable for failing to remove sexually explicit digital forgeries once notified?” The largest proportion of registered voters (61 percent) said they “Strongly agree,” while a quarter said they “Somewhat agree.” Only 7 percent of all respondents said they either somewhat or strongly disagreed.

Voters were notably more torn as to whether “individuals who create or share nonconsensual AI-generated intimate imagery” should face “criminal penalties” for doing so, although a majority (56 percent) of respondents still replied, “Yes.” Nearly a third of respondents said they were unsure about this issue, and 15 percent replied in the negative.

Perhaps unsurprisingly, women — who have been shown to be disproportionately targeted for image-based sexual abuse writ large — were slightly more likely than men on every measure to support legal liabilities and mandated removals of AI-generated intimate imagery. Interestingly, however, men were more likely than women (14 percent compared to 8 percent) to report that they were “personally aware of any incidents involving the sharing of nonconsensual AI-generated intimate imagery” among their friends, colleagues, children, or local communities.

These results suggest that a strong majority of US voters support legal — and potentially criminal — consequences for the individuals who distribute AI-generated intimate images without the consent of their subjects, as well as for the platform companies that fail to take them down after being notified by victims.

To meet these voters’ demands, the federal government will have to do more than simply pressure AI companies to act ethically and responsibly in the historically rapid development of AI image-generation models. Several federal bills pertaining to AI-generated intimate images, including the DEFIANCE Act and the TAKE IT DOWN Act, are currently awaiting further action in Congress.

The results of this survey suggest that Americans largely support these measures, and are eagerly awaiting legislative action rather than merely corporate promises.

Authors

Kaylee Williams
Kaylee Williams is a PhD student at the Columbia Journalism School and a research associate at the International Center for Journalists. Her research specializes in technology-facilitated gender-based violence, with a particular emphasis on generative AI and non-consensual intimate imagery. Prior to...