New Study Examines Features and Policies for 29 AI ‘Undressing’ Apps

Tim Bernard / Dec 23, 2025

With advances in generative AI over the last few years, many harms have become apparent, from the environmental impact of model training to disinformation to job losses. One of the most tangible and widespread negative impacts has been the proliferation of undressing (or “nudify”) apps, which enable unskilled users to generate non-consensual intimate images (NCII) of victims rapidly, cheaply, and often with just one photo of the subject’s face.

A new study published in the journal Violence Against Women by Kaylee Williams, a PhD candidate at Columbia Journalism School and an occasional contributor to Tech Policy Press, examines a set of these apps through the lens of tech-facilitated gender-based violence (TFGBV), deriving data with which to place the phenomenon in its sociological context and, specifically, to show how these apps “contribute to broader dynamics of gendered power.”

Williams began by identifying a set of apps to examine. From an initial 101 candidates, she eliminated those that were defunct, duplicates, in Russian, or did not appear to “primarily exist for the creation of NCII.” Following ethical best practices and legal risk mitigations, those that did not require users or subjects to be over the age of majority were also removed. (The use of undressing apps by minors, and on images of minors, is a subject of great importance, but necessarily was not the focus of this paper.) Twenty-nine apps remained in the set for content analysis.

These apps were examined as to their affordances, marketing material, and policies, “while focusing on the social dynamics that appeared to influence their design, use, and impact.” The findings include the following highlights:

  • Gender. The models underlying the apps seem to have been primarily trained on images of women. The apps were clearly directed at men, and all were capable of creating sexually explicit images of women, while only 12 were able to create those of men. When an image of a man is submitted to an app that does not support this functionality, it generates an output with female sex characteristics. Tellingly, one of the apps labeled the feature for altering images of men “gay section,” reflecting the assumption that the intended user base is exclusively male. “The sexualization of women,” writes Williams, “is a core function of these platforms, while similar treatment of men is considered secondary or optional.”
  • Age and race. Marketing images for the products showed almost all white and Asian women, who appeared to be in their late teens or early twenties.
  • Motivation. Marketing language on the apps reflected themes of sexual desire and fantasy, as one might expect, but also included references to “creative expression and storytelling,” and to producing works to participate in a community of creators. Williams links this to earlier work on image-based sexual abuse that suggests that the abusers “often do not recognize their behavior as innately harmful or a violation of the subject's privacy.”
  • Privacy. All but one of the apps had a privacy policy that “typically assure[s] users that the pornographic images they generate are either (a) digitally stored on the platform's servers only for a limited time, or (b) automatically deleted after being downloaded by the user.” This underscores an awareness that the generated material may be legally or personally compromising to the user (and is somewhat ironic, given that the standard use case intrinsically trespasses on subjects’ privacy).
  • Content takedown. Almost half of the sites offered some sort of takedown procedure, in some cases explicitly for images generated without the subject’s consent.
  • Free or cheap. Despite being supported almost entirely by subscriptions and other user purchases (as far as was apparent), 26 of the 29 apps offered a free trial or a free tier. Six of the apps offered one-off microtransaction payments for individual images or small bundles of credits. This demonstrates how easily accessible these services are.
  • Referral programs. Around two thirds of the apps offered discounts or credits for recruiting others. “These programs serve as an incentive for creators of NCII to introduce others to the practice, and thereby contribute to the proliferation of a rapidly growing, and demonstrably harmful phenomenon.”

These findings support an overall picture of an environment that furthers the objectification of women. The control of the user and the lack of consent of the subject are both explicitly promoted. “These messages trivialize and even glorify the violation of subjects’ privacy, autonomy, and dignity, while framing women as objects existing primarily for male sexual pleasure,” writes Williams. The apps transform “women's likenesses into data that can be stripped of context, agency, and humanity.” One platform even boasts, in these words, that its offering “effectively subjects female bodies to the male gaze.”

“Ultimately,” writes Williams, “these findings demonstrate that undressing apps are not simply innovations in AI-powered image generation, but vehicles of systemic GBV (Gender Based Violence), embedded in a broader digital ecosystem, which objectifies women and commodifies their exploitation.”

These technologies are presented for little or no financial cost, encourage the growth of a community of users, and attempt to insulate them from any consequences, adding to the normalization of this posture towards women. This “pervasive normalization of GBV” creates a chilling effect, discouraging women from participating in public life (online or in general) due to the fear of having their image subjected to these processes.

Williams ends with reflections on the policy implications of undressing apps, noting that regulation prohibiting the creation, facilitation, possession, and distribution of NCII, including AI-generated NCII, is uneven across jurisdictions. However, even where such laws exist, “consistent enforcement remains nearly impossible, due to the relative anonymity of deepfake creators, the decentralized nature of these apps, and the far-reaching legal immunity that American platform companies are afforded as it relates to harmful content published by their users.” Although recent regulations—like the TAKE IT DOWN Act, passed in the US earlier this year—set out formal requirements, how they will be enforced and how issues of liability and jurisdiction will be handled remains unproven in courts and in real-world practice, Williams says.

Furthermore, Williams explains, appropriate legislative solutions are far from simple. This study demonstrates how undressing apps are deeply embedded in a culture of TFGBV, suggesting that combatting it effectively “requires moving beyond takedown obligations to interrogate how information infrastructures, financial incentives, and cultural framings actively encourage the objectification and exploitation of women.”

Authors

Tim Bernard
Tim Bernard is a tech policy analyst and writer, specializing in trust & safety and content moderation. He completed an MBA at Cornell Tech and previously led the content moderation team at Seeking Alpha, as well as working in various capacities in the education sector.
