When Age Assurance Laws Meet Chatbots
Zeve Sanderson, J. Scott Babwah Brennen / Sep 5, 2025
NYU’s Center on Technology Policy receives funding from foundations and tech companies, including Meta, a company referenced in this article.
In technology policy, there is often disagreement not only over solutions but also over the very nature of the problem. One notable exception is the importance of protecting children from exposure to sexually explicit or developmentally inappropriate material. Parents, policymakers from both major parties, and pornography companies themselves agree that minors should not have unfettered access to such content, even if they differ on how to achieve that goal.
That consensus is now colliding with a burgeoning reality: children are among the most frequent users of chatbots, which present new ways for children to access sexual content. Some prominent AI companies, such as OpenAI, prohibit their chatbots from creating sexual content for all users, though these guardrails have at times failed. Others have taken a different approach. A recent Reuters investigation revealed that Meta’s internal policies permitted its AI chatbot to engage in “sensual” conversations with minors. Grok already offers what’s been called a “porn companion” and is reportedly enabling a new feature in October that will allow users to generate six-second video clips, including clips containing nudity. Outside of well-capitalized companies, an entire ecosystem of pornographic chatbots has emerged and grown in popularity, even though it has not attracted significant venture investment.
Yet, chatbots do not easily slot into existing child online safety regulations, creating a gap between our regulatory frameworks and the products that children have access to. If age assurance as a policy is here to stay, how should legislators adjust existing rules to technologies that don’t fit the platform model they were built for?
Age assurance has a long and contested history. Efforts to regulate access to online pornography in the United States date back to the 1990s, when laws such as the Communications Decency Act were largely struck down by courts on First Amendment grounds.
For decades, mandatory age assurance was viewed as unconstitutional. That legal barrier has now shifted: in June, the Supreme Court allowed Texas’ age-assurance law for adult websites to take effect, essentially green-lighting a wave of regulations.
Today, more than 20 states have adopted similar requirements, most of which apply when a substantial portion of a site’s content — typically defined as one-third or more — is pornographic.
The United Kingdom’s new Online Safety Act (OSA) takes a different approach. Rather than setting percentage thresholds, it requires “highly effective” age assurance for services that publish or allow access to pornography and other material harmful to children, with regulators empowered to demand age-gating for specific parts of a service. Enforcement began in July as the UK regulator Ofcom opened its first investigations.
France and Germany offer still other models: France’s media regulator has authority to block noncompliant sites, while Germany has long required regulator-approved age-assurance systems under the Interstate Treaty on the Protection of Minors in the Media, known as the JMStV. The global trend is clear: age assurance has moved from contested concept to policy orthodoxy.
Yet these laws are designed for platforms that host pre-existing content. A pornographic website or a social media platform has an identifiable library of material, which can be classified, labeled, and subjected to thresholds. Chatbots don’t work this way. They have no inventory of content that exists prior to user interaction. Their responses are stochastic: generated on the fly and varying according to input.
A child may encounter something inappropriate not because the platform “hosts” adult material but because the model generates it dynamically. This makes percentage thresholds meaningless and pre-screening impossible. Regulators themselves acknowledge the difficulty. Ofcom has clarified that the Online Safety Act applies to generative artificial intelligence and chatbots, but has also conceded that translating statutory duties into workable compliance rules for such products remains unsettled.
The risk is not simply that current laws fail to map onto chatbot dynamics, chill protected speech or endanger user privacy, but that poorly designed regimes could backfire. We already have evidence of the unintended consequences of state online pornography laws. When Louisiana and other states adopted age-assurance laws, research showed that some users began employing VPNs to mask their location. Others shifted to platforms that were not compliant, potentially leading some minors to encounter more extreme material than they would have otherwise. The same substitution dynamic emerged in the UK in the month after the OSA’s implementation, with non-compliant sites seeing large increases in traffic from British users.
Similar patterns could repeat with chatbots, but with greater consequences. Restrictions may push children toward offshore providers beyond regulatory jurisdiction, or toward open-source models they can run locally, which are improving rapidly and where safeguards are harder to impose. Unless regulators are careful, age-assurance laws could end up driving children to products with the least protection.
What, then, are the options? None are without trade-offs.
One approach would be to mandate the creation of tiered products, offering distinct versions of chatbots for different age groups — for example, a restricted “under-18” chatbot alongside a more permissive “adult” version. This would make clear what environment a child is in, but it risks driving minors toward less regulated alternatives if they perceive the restricted product as inferior. The Leading Ethical AI Development (LEAD) for Kids Act, which is currently progressing through the California legislature, would prohibit developers from creating “companion chatbots” for minors. As currently written, the law likely prohibits chatbots from engaging in any sort of “sexual relationship” with a minor.
A second approach is to filter outputs within a single product, adjusting the guardrails based on a user’s verified age. This avoids fragmenting the market into separate products, but no filtering system can be perfect when responses are probabilistic and when motivated users attempt to break safeguards.
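To make that second approach concrete, here is a minimal, hypothetical sketch of how a single product might vary its output guardrails by verified age tier. The tier names, content categories, thresholds, and the placeholder classifier are all illustrative assumptions, not any vendor’s actual policy or API.

```python
# Hypothetical sketch: one chatbot product, guardrails keyed to verified age tier.
# Tier names, categories, and thresholds are illustrative assumptions.
from dataclasses import dataclass

# Per-tier ceilings on classifier scores (0.0-1.0) for each content category.
# A lower ceiling means stricter filtering for that tier.
POLICY_BY_TIER = {
    "under_13":      {"sexual": 0.0, "violence": 0.1, "self_harm": 0.0},
    "teen_13_17":    {"sexual": 0.0, "violence": 0.3, "self_harm": 0.1},
    "adult_18_plus": {"sexual": 0.8, "violence": 0.8, "self_harm": 0.5},
}

@dataclass
class ModerationResult:
    allowed: bool
    violated_categories: list

def score_output(text: str) -> dict:
    """Placeholder for a real content classifier; returns category scores."""
    # In practice this would call a trained moderation model, not return zeros.
    return {"sexual": 0.0, "violence": 0.0, "self_harm": 0.0}

def filter_output(text: str, age_tier: str) -> ModerationResult:
    """Block a generated reply if any category score exceeds the tier's ceiling."""
    limits = POLICY_BY_TIER[age_tier]
    scores = score_output(text)
    violations = [cat for cat, score in scores.items() if score > limits[cat]]
    return ModerationResult(allowed=not violations, violated_categories=violations)

if __name__ == "__main__":
    reply = "Here is some generated text."
    result = filter_output(reply, age_tier="teen_13_17")
    print("deliver" if result.allowed else f"block: {result.violated_categories}")
```

Even in this toy form, the weakness noted above is visible: the decision hinges on a probabilistic classifier and on the reliability of the age signal, so classifier errors and determined users will both leak through.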
A final option is to move toward risk-based regulation. Instead of defining obligations around specific content categories, regulators would focus on systemic oversight: requiring providers to test their models, publish transparency reports, and demonstrate how they mitigate foreseeable harms. This may prove more adaptable to generative products, but it remains underspecified and may be difficult to enforce in practice. This is the approach taken by Age-Appropriate Design Code laws that have been enacted in a handful of states.
In reality, the most likely path is a hybrid approach, layering parental controls, developer guardrails, and user-level assurance in an attempt to balance safety, privacy, and usability.
We remain skeptical of these regulatory regimes because of their potential chilling effects, the risks of behavioral adaptation, and the privacy threats they introduce. But even if we succeed in drafting clear, evidence-based policies for chatbots, a deeper question remains.
As journalist Casey Newton has written, we cannot solve societal problems at the level of tech policy. Nowhere is that more true than here. The real question is how we help children grow into well-functioning adults capable of building healthy, meaningful relationships.
Chatbots may introduce novel and heightened risks — risks that warrant policies to address them — but the underlying issue is broader than digital technologies. Unfortunately, the United States has been moving in the wrong direction.
The defunding of the Institute of Education Sciences is one sign of the broader erosion of our capacity to prepare children for adulthood. Protecting them online is necessary, but without serious reinvestment in education, research, and social support, age assurance will remain an inadequate answer to a far larger problem.