
Meta Oversight Board Announces Plans to Rule on Sexually Explicit Deepfakes of Public Figures

Kaylee Williams / Apr 19, 2024

Meta’s quasi-independent Oversight Board announced plans Tuesday to assess the company’s approach to explicit, AI-generated images of female public figures on Facebook and Instagram.

In the last year, the two Meta-owned social media platforms, which boast 3 billion and 2 billion users, respectively, have been flooded with sexually explicit and suggestive AI-generated content. The surge is due in part to the growing number of publicly available AI image generators and “nudify” apps, which allow anyone to generate realistic, sexualized images of a person without their knowledge or consent.

Oversight Board Co-Chair Helle Thorning-Schmidt said in a written statement that the investigation—which will center on two specific content decisions made by Meta regarding explicit AI images of two unnamed public figures—will explore whether the company’s policies and enforcement practices are “effective” at addressing this growing problem.

"Deepfake pornography is a growing cause of gender-based harassment online and is increasingly used to target, silence and intimidate women – both on and offline. Multiple studies show that deepfake pornography overwhelmingly targets women,” Thorning-Schmidt’s statement reads.

Nicknamed the “Supreme Court” of Facebook at the time of its launch, the Oversight Board is funded by Meta but operates largely independently, reviewing content decisions “to see if the company acted in line with its policies, values, and human rights commitments,” according to the organization’s website. The board has the power to overturn the company’s content moderation decisions and to hand down non-binding “policy advisory opinions” on broader ethical dilemmas.

A representative for the board explained via email that the organization has chosen to conceal the identities of the two public figures whose emblematic cases sparked the inquiry—one woman in India, and another in the United States—in order “to prevent further harm or risk of gender-based harassment.”

In the first case, the nude, AI-generated image was initially reported to Meta after being posted on Instagram, but the image was allowed to remain on the platform after the report was automatically closed “because it was not reviewed within 48 hours.” It’s unclear why the report went unreviewed by Meta for so long, but the image was eventually taken down after the Oversight Board notified Meta that it would be reviewing this particular decision.

In the second case, the explicit image was posted to a Facebook group dedicated to AI-generated content after it had previously been removed for violating Facebook’s “Bullying and Harassment” policy, specifically its rule against “derogatory sexualized photoshop or drawings.” The image was taken down a second time, prompting the user who posted it to appeal the decision to Meta and, ultimately, to escalate the issue to the Oversight Board, presumably to argue that the image should be allowed to stay on the platform.

Over the next two weeks, the Oversight Board will accept public comments on these cases, as well as on the wider status of explicit AI images on Meta platforms. After the public comment period closes, the board will rule on the two cases and decide whether the posts in question—and presumably others like them—“should be allowed on Instagram or Facebook.”

“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to look at whether Meta is protecting all women globally in a fair way,” said Thorning-Schmidt.

The board’s announcement does not address whether the organization’s ruling and subsequent recommendations will pertain to instances of explicit AI imagery targeting private citizens, who—unlike many public figures—do not have the means or the public notoriety to combat this sort of abuse.

Studies have shown that technology-facilitated sexual violence, such as the nonconsensual publication of explicit images, can have lasting and devastating effects on a victim’s health, career prospects, personal relationships, and financial well-being, among other aspects of her life. And many experts and advocates for victims of image-based sexual abuse, such as disinformation researcher Nina Jankowicz, have pointed out that for many women living under especially conservative or paternalistic regimes, the consequences of this sort of reputational damage could even prove deadly.

For example, Oversight Board member Nighat Dad, a human-rights lawyer in Pakistan, was recently quoted in a Rolling Stone article saying that in some parts of the world, AI image-based blackmail and other forms of technology-facilitated abuse have already led to honor killings and suicides.

However, the announcement of the investigation comes less than a week after Meta began testing new direct messaging features aimed at protecting individual users (and especially teens) from “sextortion scams” and other forms of image-based abuse.

At the very least, both moves suggest that Meta and the Oversight Board are thinking carefully about the company’s long-term strategy for handling image-based sexual abuse, and likely hoping to avoid the criticism that befell X back in January, after explicit AI images of Taylor Swift forced the platform to temporarily block all searches for the singer/songwriter’s name as the company scrambled to mitigate their spread.

Authors

Kaylee Williams
Kaylee Williams is a Ph.D. student at the Columbia Journalism School. Her research focuses on the impacts of journalism, mass media, and technology on American politics. Before pursuing her Ph.D., Kaylee served as a research fellow at Harvard University’s Shorenstein Center for Media, Politics & Pub...
