AgeKey and the Potential Emergence of American-Style Age Verification
Meg Leta Jones, Clare Morell / Jan 8, 2026
2026 is poised to be the year age verification changes the internet as we know it. OpenAI CEO Sam Altman recently announced that ChatGPT would pull back its guardrails and treat “adult users like adults,” allowing erotica for verified users over 18. Age gating, long lamented as impossible and privacy-invasive, will be here in a matter of months from the world’s most influential AI company. It’s not just OpenAI. Character.AI announced weeks later that it would begin age-gating its companion AI product for those under 18. And just in November, a coalition of major tech companies launched the OpenAge Initiative, unveiling AgeKey, a privacy-preserving, reusable age verification system built to work across platforms. The infrastructure for an age-aware internet is no longer theoretical. It’s here.
But a closer look at these efforts, which are certainly steps in the right direction, reveals some critical gaps. AgeKey is voluntary, and it is only a technical protocol for communicating age information: infrastructure that lets sites and apps incorporate age verification if they choose to (or if laws require them to), but that does not itself require or perform age verification. In other words, it can implement laws that require treating kids differently than adults online, but lawmakers still need to pass those laws.
The momentum for mandatory age verification is building rapidly. During the government shutdown, Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced a new bipartisan federal bill, the GUARD Act, that would restrict AI companions to adults only. It is one of dozens of state and federal bills proposed this year that require verification systems in order to treat kids differently than adults online. With increasingly available and advanced technological means to verify age, coupled with the Supreme Court’s recent decision in Free Speech Coalition v. Paxton, which states that “adults have no First Amendment right to avoid age verification,” the writing is on the wall: a wave of age verification requirements is headed for the US.
In many ways, online age verification measures are already here. Millions of Americans already use ID.me to verify military, teacher, or first responder status for discounts. Apple now lets users upload their driver’s license once to Apple Wallet, in some cases verified by the state issuing authority, and then easily use it for age verification in apps. Some states, like Louisiana, even have their own digital ID programs that offer an anonymous verification process for accessing age-restricted sites or platforms. Anyone who gambles on DraftKings or orders alcohol online is familiar with age verification. For age estimation, a check of whether someone is over a required age or the age they claim to be, services like Yoti provide facial age analysis that does not require matching against an existing image database: they simply estimate whether someone appears over 18 from a camera scan, and the images are not stored. Age estimation is already used by Instagram when it detects potentially mis-aged users, and by dating sites to prevent catfishing and child sexual abuse.
The OpenAge Initiative’s AgeKey demonstrates how seamlessly this technology can work at scale. Built on the same standards that secure passwordless logins, AgeKey allows users to verify their age once through their choice of method and then stores a cryptographic age signal on their own device. When they encounter age-restricted content, they simply authorize sharing this signal. The platform receives only confirmation that the user meets the age threshold, not their name, birthdate, address, or any identifying information.
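What that flow looks like is easiest to see in a sketch. Everything below (the AgeSignal shape, the field names, the raw Ed25519 signature) is our illustrative assumption, not the OpenAge specification, which is built on FIDO passkey standards and handles issuance, attestation, and revocation in far more detail:

```typescript
// Hypothetical sketch of an AgeKey-style exchange. The message format and
// signing scheme are assumptions for illustration; the real protocol differs.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// An age signal carries only a threshold claim, never a birthdate or name.
interface AgeSignal {
  overThreshold: boolean; // e.g. "user is over 18" -- nothing more
  threshold: number;      // the age being attested, e.g. 18
  issuedAt: number;       // Unix timestamp, so stale claims can be rejected
}

// 1. A trusted issuer (ID check, facial estimation, etc.) signs the claim
//    once; the signed blob then lives on the user's own device.
const issuer = generateKeyPairSync("ed25519");
const signal: AgeSignal = { overThreshold: true, threshold: 18, issuedAt: Date.now() };
const payload = Buffer.from(JSON.stringify(signal));
const signature = sign(null, payload, issuer.privateKey);

// 2. When a platform asks, the device releases only the claim and the
//    signature -- no name, birthdate, address, or document image.
function platformCheck(payloadFromDevice: Buffer, sig: Buffer): boolean {
  // In practice the platform would hold the issuer's public key via a
  // trust list; here we reuse the keypair above for brevity.
  if (!verify(null, payloadFromDevice, issuer.publicKey, sig)) return false;
  const claim: AgeSignal = JSON.parse(payloadFromDevice.toString());
  // The platform learns one bit: the user meets the threshold, or not.
  return claim.overThreshold && claim.threshold >= 18;
}

console.log(platformCheck(payload, signature)); // true
```

The design choice worth noticing is that the platform’s check returns a single bit. Whether the claim was established by a driver’s license or a facial scan is invisible at this layer, which is what lets one credential be reused across platforms.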
The technical infrastructure for interoperability exists (or soon will), and major platforms already possess sophisticated verification technologies, but they currently deploy them only selectively, for particular features. Unless companies are held legally accountable for age verification, not just device companies but apps and websites as well, and held to clear standards for what a verification process must offer (multiple high-quality options for users to choose from, of which a device signal could be one), parents cannot count on companies to voluntarily age-gate their harmful products, or on those voluntary gates being effective. Laws should also require interoperability, so that age signals stored on a device must be shared when requested by an app or website that is required to verify its users.
Why would OpenAI want to make sure its process is highly effective at identifying minors if there is no liability for failing to do so, and if more people, including minors, accessing erotica means more business and more profits? The Washington Post columnist Geoffrey Fowler, a dad himself, says it took him about five minutes to circumvent the new OpenAI controls, which were put in place voluntarily after the company was sued by parents who lost a child to suicide following his use of ChatGPT, and who described the disturbing details of the AI system’s interactions with their teen to the Senate. Fowler writes, “All I had to do was log out and create a new account on the same computer. (Smart kids already know this.)”
This is precisely why voluntary initiatives remain insufficient. The OpenAge Initiative reportedly has platforms like Meta, Discord, Snap, Quora, Tumblr, and Kick planning to implement AgeKey, which will recognize credentials from Apple, Google, and Samsung wallets. But “planning to implement,” with unclear accountability among these major platforms, leaves children unprotected and parents confused. Many tech companies, including Roblox and TikTok, do not yet appear to have publicly signed onto the initiative, and others are developing their own strategies for establishing user age. Can a teenager choose not to verify with AgeKey? Can a platform choose to ignore the age signal? Can it substitute its own age verification? Even if the technology works perfectly, the law needs to alter the incentive structure.
We’ve seen this before. In 2009, researchers proposed “Do Not Track,” a browser setting that would let users signal their privacy preferences to websites. By 2012, all major browsers had adopted DNT, but very few websites ever honored the signal because there was no legal obligation to do so. Yahoo! and Twitter initially said they would respect it, but later chose to ignore it. The most popular sites on the internet, from Google and Facebook to major adult sites, never honored it in the first place. The DNT signal was eventually abandoned by the browsers and by the standards bodies working on its technical details. Voluntary technical standards, no matter how well-designed or widely implemented, cannot substitute for legal mandates when financial incentives point in the opposite direction.
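The mechanism itself could hardly have been simpler, which is what makes the failure instructive: DNT was a single HTTP request header, and nothing compelled a server to ever read it. A minimal sketch (the server code is illustrative, but “DNT: 1” is exactly what browsers sent):

```typescript
// Participating browsers attached "DNT: 1" to every request. Whether a
// site looked at it was entirely the site's choice; most never did.
import { createServer } from "node:http";

createServer((req, res) => {
  // Node lowercases header names; the signal is one character.
  const optedOut = req.headers["dnt"] === "1";
  // Nothing enforced this branch -- that was the whole problem.
  res.end(optedOut ? "tracking disabled" : "tracking as usual");
}).listen(8080);
```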
In a digital age where much of the online world is now appropriate only for adults to observe or operate (AI erotica, online gambling, dating apps, OnlyFans, porn sites, romantic AI companions, and more), we need an internet that allows platforms to distinguish between adults and children in a privacy- and rights-respecting way. When devices, browsers, app stores, and individual platforms all have clear legal obligations to verify age and share verification signals, the system becomes far more robust against circumvention while providing a seamless user experience and effective protection for America’s children. American-style age verification should not be a tech-friendly acceptance of the status quo, but a large-scale effort to structure coordination among tech players to, once again, set a new standard for how the internet will work.