Inocencia en Juego: An Investigation into Groups Targeting Children on Facebook

Justin Hendrix, Ben Lennett / Feb 26, 2025

The following accompanies an analysis published in Tech Policy Press by Lara Putnam. It appears in tandem with a series of independent reports coordinated by the investigative journalism consortium El CLIP and published by El CLIP, Chequeado, Crónica Uno, El Espectador, and Factchequeado.

Today, Tech Policy Press joins the Latin American Center for Investigative Journalism (El CLIP) in publishing a report and series of articles documenting how adults use public Facebook groups to identify accounts that indicate they belong to children and target them for sexual exploitation.

The “Innocence at Risk (Inocencia en Juego)” project, coordinated by El CLIP with participation from Chequeado, includes a report from Lara Putnam, a professor of Latin American history and Director of the Civic Resilience Initiative of the Institute for Cyber Law, Policy, and Security at the University of Pittsburgh, as well as independent reports from journalists across Latin America investigating a pattern of behavior in the platform’s public groups.

The reporting collaboration stems from Putnam's research. In 2022, Putnam published an analysis in Wired detailing public, Spanish-language Facebook groups where children were being targeted for sexual exploitation. Although many of the groups she identified were subsequently removed, in 2023 Putnam discovered a new trove of groups engaging in similar practices, which she documented in a report published by Tech Policy Press in January 2024.

In 2024, Putnam approached Tech Policy Press with additional research documenting how popular Facebook groups centered around Latin American and K-Pop celebrity fandoms, YouTube stars, and influencers become venues for child predation. Tech Policy Press and Putnam brought the report to Chequeado, which engaged with El CLIP to coordinate reporting with journalists in Colombia, Venezuela, and Argentina. Today, they published their reports in El CLIP, Chequeado, Crónica Uno, El Espectador, and Factchequeado. (Links to these stories are below.)

These reports reveal a range of phenomena, all of which fit a pattern. Adult users take advantage of groups that form around celebrities and influencers to engage with users who indicate they are children and often below the age of 13. They share sexually explicit material, sometimes including links to adult porn and apparent CSAM videos, and encourage accounts that indicate they belong to children to share information and engage in direct communication.

The reports by Putnam and the team of journalists in Latin America detail a variety of related phenomena:

  • Accounts, presumably operated by adults, use public posts to target and interact with accounts identifying themselves as children, enticing those accounts to add them as a “friend” in order to enable private messaging. These posts often ask for a user’s photo, age, and country.
  • Significant numbers of user accounts across multiple fan groups self-identify as children. For instance, in a “True Beauty” fan group with 135,000 members, posts asking users to disclose their photo, age, and country elicited responses from users self-identifying as young as 8. Although some of these users may be lying about their age to build relationships with other teen and pre-teen users, the frequency of self-identification suggests that many children participate in these groups.
  • In groups oriented toward teens and pre-teens, posts routinely include explicitly sexual photos, self-photos apparently uploaded by child users, posts trolling for sexual engagement, and links to adult porn and apparent CSAM videos. Photographic and photorealistic child nudity and other graphic sexual content appear in groups sometimes described as “teen dating groups,” with some posts “age trawling,” or seeking engagement from both teen and pre-teen accounts.
  • Through searches using CrowdTangle, a tool Meta previously offered researchers and journalists for searching posts on its platforms, journalists working on the project found thousands of posts fitting these patterns in Facebook groups between March 2022 and August 2024. They searched by Spanish-language keywords such as “age” paired with numbers corresponding to ages from seven to 17, along with phrases such as “I show it to you,” “girls, which are you,” “if you like older men,” “where would you kiss me,” and “I do pass it” (code indicating that the user is willing to share explicit images and videos); a minimal illustration of this kind of keyword filtering follows this list.
  • The investigation found that many of the most popular posts (measured by comments) came from profiles that post in more than one of the analyzed groups, usually repeating the same messages asking other group members to share their ages. In one case, a user who posted in several groups was also an administrator and moderator of some of them.
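
To give a concrete sense of the method, here is a minimal, hypothetical sketch in Python of the kind of keyword filtering described above, applied to an imagined CSV export of public group posts. Everything specific to it (the filename, the “message” column, and the Spanish phrasings, which are back-translated from the English glosses in the list) is an assumption for illustration, not the project’s actual queries or tooling; CrowdTangle itself was discontinued in August 2024.

    import csv
    import re

    # Back-translated approximations of the reported Spanish search phrases;
    # the journalists' exact queries are not public, so these are placeholders.
    KEYWORDS = [
        "edad",                              # "age"
        "te la muestro",                     # "I show it to you"
        "si te gustan los hombres mayores",  # "if you like older men"
        "dónde me besarías",                 # "where would you kiss me"
        "yo sí la paso",                     # "I do pass it"
    ]
    # Bare numbers corresponding to ages seven to 17.
    AGE_PATTERN = re.compile(r"\b(?:[7-9]|1[0-7])\b")

    def matches(text: str) -> bool:
        """Return True if a post contains any keyword or a bare age number."""
        lowered = text.lower()
        return any(k in lowered for k in KEYWORDS) or bool(AGE_PATTERN.search(lowered))

    # The filename and "message" column are assumed for illustration.
    with open("group_posts_export.csv", newline="", encoding="utf-8") as f:
        hits = [row for row in csv.DictReader(f) if matches(row.get("message", ""))]

    print(f"{len(hits)} posts matched the search patterns")

Even this crude filter hints at why such posts are hard to moderate automatically: the individual terms are innocuous on their own, and, as David Thiel notes below, it is the pattern of engagement around them that reveals intent.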

We presented a set of questions to Meta about these phenomena. A Meta spokesperson responded with a statement and a link to information about the company's efforts to proactively address these and similar phenomena:

Child exploitation is a horrific crime. We work aggressively to fight it on and off our platforms and to support law enforcement in its efforts to arrest and prosecute the criminals behind it. Our policies prohibit child exploitation, inappropriate interactions with children, and the sexualization of minors; these rules apply globally, in different languages, including English and Spanish, and across each of our platforms. While predators constantly change their tactics to evade detection, our global teams and tools work to identify and quickly remove violating content.

Meta points to detailed policies against child nudity, abuse, and exploitation, including against sharing or soliciting child exploitation imagery, inappropriate interactions with teens, and the explicit sexualization of minors. The platform says it has rules against more implicit sexualization and removes accounts dedicated to sharing what might appear to be otherwise benign images of minors when captions and comments are predominantly focused on children’s appearance. The company reports apparent instances of child sexual exploitation content to the National Center for Missing and Exploited Children (NCMEC), which engages with law enforcement in the US and countries around the world, and it uses technology to identify adults engaged in potentially suspicious behavior.

However, experts interviewed by Tech Policy Press said that the prevalence of these phenomena over the years suggests Meta is not doing enough to police its platforms.

“This investigation reveals a deeply troubling reality—one where, due to insufficient protections on digital platforms, millions of young users are openly exposed to predatory behaviors in public online spaces and then moved to more private spaces where the abuse happens,” said Marija Manojlovic, executive director of Safe Online, an organization that invests in child online safety solutions.

In response to the series of questions posed by Tech Policy Press and El CLIP, a Meta spokesperson confirmed that “people under 13 are not allowed on Facebook” and added that the company uses “a range of tools to identify, review, and remove underage users” and will delete an account if it cannot verify that the user is 13 or older. Meta points to features it has implemented to limit direct messaging between adults and minors, but the material collected by Putnam suggests accounts impersonating celebrities use gamified techniques to try to convince accounts that identify as children to connect. After Tech Policy Press shared a list of groups with Meta, most appeared to be no longer available on the platform.

The tactics and behavior of users targeting children sometimes fall into a grey area for social media platforms. Though companies such as Meta have policies in place prohibiting such activity, it may be difficult for automated systems to detect, given the often coded manner in which predatory users communicate and interact in groups. Furthermore, while policymakers have focused on pressuring platforms to monitor, remove, and report CSAM, these public groups often serve as a means to identify and target children and teens and then move those underage users to private channels for communication.

For instance, a 2023 Stanford Internet Observatory investigation found networks of underage sellers who produce and market self-generated CSAM across multiple social media and other platforms. Sellers advertise their content on Instagram and X, use direct messaging to facilitate transactions, and then distribute the content via file-sharing services such as Dropbox or Mega.

David Thiel, former chief technologist at the Stanford Internet Observatory and a former engineer at Meta, said that in his research, he had observed a substantial amount of material similar to what Putnam collected in her report. This material may be difficult to moderate, said Thiel, because it seems to fit a “borderline content classification where it's obvious that the majority of people that are interacting with the posts are sexually interested in children, but the content itself is harder for them to moderate unless you can prove that the account is actually operated by a child under the age of 13.”

Andy Burrows, CEO of the Molly Rose Foundation, a UK charity that promotes youth mental health and suicide prevention in memory of Molly Russell, a London teen who took her own life after viewing images of self-harm on social media, said the phenomena described in Putnam’s report indicate that Meta’s platforms are being used to systematically target children.

“The appalling reality is that Meta has allowed this pernicious trend of child abuse ‘breadcrumbing’—content that enables abusers to target victims and form networks with other offenders—to continue unchecked for many years,” said Burrows. He added that it is not surprising that this phenomenon appears to target children in Spanish-language groups. “Offenders often have a sophisticated understanding of the weak links in platform moderation and will readily use languages and product surfaces which they know are easiest to exploit for criminal intent.”

Safe Online’s Manojlovic said making investments in trust and safety systems and tools, particularly for non-English-speaking regions and emerging markets, is essential to reducing the threat. “Too often, these regions lack the infrastructure and localized tools needed to detect and prevent harm,” she noted. Her organization has invested in “culturally and contextually specific solutions, such as CSAM classifiers that account for skin tone variations and grooming detection tools in Spanish.”

Ravi Iyer, managing director of the Neely Center for Ethical Leadership and Decision Making at the USC Marshall School of Business, who previously led data science, research, and product teams across Facebook, pointed to fundamental design problems that allow predatory users to evade automated systems.

“The research clearly shows the limitations of drawing lines around what users can or cannot say to each other. There are an infinite set of ways for motivated predators to entice youth to add them as friends,” said Iyer. “While most people online are not predators, the design of these systems allows a small group of criminals to target others en masse. Attempts to develop new policies and procedures, without addressing these root design issues, will inevitably fail—no matter how many resources Meta throws at enforcing against the problem.”

Simply put, Iyer told Tech Policy Press, children should not be able to interact with strangers online in ways that would not be regarded as appropriate in the real world. He suggested that legislation will be necessary to force platforms to address fundamental design problems. “Clearly, online spaces need to be designed differently for children, which is what current child safety legislation is attempting to mandate,” Iyer said.

David Thiel offered that Facebook is likely using systems “that try to detect approximate age [of users] without actually using declared age as a strong signal” but that “it can often be hard to say… that a person is absolutely under 13,” or to say so with enough certainty that the company would be willing to suspend an account or require a parent or guardian to submit an ID to verify it. He also pointed to differences across countries in the age of consent, which might make it seem like there is a lot of unaddressed activity. Still, he acknowledged that even though systems are in place to address this problem, “they're more for discovering patterns of bad activity than just saying, you're allowed on [the] platform.”

Burrows, the CEO of the Molly Rose Foundation, worries that cuts to trust and safety teams and a change in content moderation priorities might mean these problems could get worse in the future.

“As the platform pivots away from proactive content moderation and sheds subject matter experts, many of us fear that child abuse on Facebook and Instagram will increase in volume and severity,” he said. “As Mark Zuckerberg chooses to de-emphasize safety while spam, scams, and CSAM are all left to proliferate on Meta's products, the risks facing children and young people appear more acute than ever.”

Manojlovic argues that content moderation alone will not adequately address these phenomena. “The fact that children as young as seven are routinely engaging with predators in these spaces highlights the urgent need for upstream interventions,” said Manojlovic. She pointed to the need for better tooling to permit children to “have safe, age-appropriate experiences.”

Katie Paul, director of the Tech Transparency Project, pointed out that Facebook plays an outsized role in the online life of many in Latin America. “Facebook and its sister app WhatsApp are two of the most widely used platforms in Latin America–and that’s by design,” she said. “The company’s Free Basics internet program, which partnered with mobile providers in Latin America to offer zero-rated data for Facebook-owned apps, cemented its role as critical infrastructure in the region. But Meta appears to have done too little to scale its moderation efforts, particularly in foreign languages, as it ramped up its user base in the region. Today, we see the very real consequences on real people—children—that come from an American tech corporation's unmitigated and unregulated growth.”

And while Mark Zuckerberg suggested in his January 7 announcement about changes to the company’s moderation practices that child safety will remain a priority, experts are concerned the company’s cuts in trust and safety and other changes in its moderation practices could indicate trouble ahead.

“Meta's bonfire of safety responsibilities raises serious questions about whether its already poor track record in tackling child abuse breadcrumbing is about to get even worse,” said Burrows.

Report and articles

Research from Professor Lara Putnam

Putnam's report compiles evidence documenting how these groups target children by asking participants to post their age and country of origin. It provides screenshots and other records of interactions in these groups to highlight the specific types of interaction that seek to manipulate young users. It also presents quantitative data on the age ranges of accounts interacting in these groups (based on the accounts' own responses) and on the national origins of users.

Reporting from Latin America

Five journalists from Spanish-language news organizations in Latin America independently published articles in El CLIP, Chequeado, Crónica Uno, El Espectador, and Factchequeado investigating the issue in their respective countries.

Authors

Justin Hendrix
Justin Hendrix is CEO and Editor of Tech Policy Press, a nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Inno...
Ben Lennett
Ben Lennett is managing editor for Tech Policy Press and a writer and researcher focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade, including as the policy director for the Open Technology ...
