
To Protect Kids Online, Policymakers Must First Determine Who is a Kid

Matt Perault, J. Scott Babwah Brennen / Jul 5, 2023

Scott Babwah Brennen is the head of online expression policy at the Center on Technology Policy at UNC-Chapel Hill, and Matt Perault is the Center’s director.

When the U.S. House of Representatives Committee on Energy and Commerce recently held a hearing with TikTok CEO Shou Zi Chew, Congressman Buddy Carter (R-GA) asked how the app determines the ages of its users.

Chew responded by describing the app's inferential system, which analyzes users' public posts to check whether their content is consistent with the age they claim to be. Before he could finish, Rep. Carter interrupted, exclaiming, "That's creepy!"

The exchange highlighted a tension at the heart of emerging policy debates over online child safety: to protect children, you must first know who is a child. But determining who is a child online not only means ramping up surveillance of everyone; it also means introducing new security risks, equity concerns, and usability issues.

The safety of children online has become perhaps the most pressing concern in technology regulation. Federal and state legislators are considering dozens of new bills addressing children's online safety. This year, Utah, Arkansas, and Louisiana have all passed laws that require children under 18 to obtain parental consent to open a social media account, which in turn requires platforms to verify the ages of all users. Proposed federal legislation, including the Social Media Child Protection Act, the Making Age Verification Technology Uniform, Robust, and Effective (MATURE) Act, and the Protecting Kids on Social Media Act, would likewise restrict children's use of social media and require platforms to verify the ages of all users.

At the center of these proposals is the need to identify a user's age, the linchpin of establishing better protections for children online. Age verification, one of a set of age assurance methods, isn't exactly a new concept: any Gen Xer or Millennial can tell you about entering a birthdate to view a movie or video game trailer in the early days of the internet. What is new is the push to go beyond asking users to self-report their birthday and to employ harder-to-evade methods, such as submitting proof of age via a government-issued ID or the use of artificial intelligence.
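To see why self-reporting is considered the weakest of these methods, consider a minimal sketch of a birthday-based age gate. The names, threshold, and logic below are illustrative, not any platform's actual implementation:

    from datetime import date

    MINIMUM_AGE = 13  # a common cutoff, echoing COPPA's under-13 threshold

    def age_from_birthday(birthday: date, today=None) -> int:
        """Compute age in whole years from a self-reported birthday."""
        today = today or date.today()
        years = today.year - birthday.year
        # Subtract a year if this year's birthday hasn't happened yet.
        if (today.month, today.day) < (birthday.month, birthday.day):
            years -= 1
        return years

    def passes_age_gate(claimed_birthday: date) -> bool:
        """Gate on a self-declared date: nothing verifies the claim."""
        return age_from_birthday(claimed_birthday) >= MINIMUM_AGE

    print(passes_age_gate(date(2015, 6, 1)))  # False: under 13
    print(passes_age_gate(date(1990, 6, 1)))  # True: a child can simply type this

Because the gate trusts whatever date the user types, it fails precisely for the users it is meant to protect, which is what motivates the ID- and AI-based alternatives.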

Yet none of these methods is a silver bullet; each has costs and poses risks. Determining who is a child requires collecting sensitive information from all users, information that can then be stolen and exploited even years later. Requiring users to submit government IDs to verify their ages could limit access for those who may not have easy access to valid government IDs, whether they are undocumented immigrants, transgender people who no longer look like their official IDs, people who have trouble reaching state administrative centers, or other historically marginalized groups. And while AI-powered facial analysis may provide some help, these systems still have significant shortcomings in terms of reliability, equity, and privacy. For example, Yoti, one of the most widely used commercial AI-based age verification tools, is less accurate for darker skin tones across every age group.
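Disparities like Yoti's are typically surfaced through accuracy audits. As a hedged illustration of how such an audit works (this is not Yoti's methodology, and the records below are invented placeholders), one can compare mean absolute error across demographic groups:

    from collections import defaultdict

    # (group_label, true_age, model_estimate) -- hypothetical audit records
    records = [
        ("group_a", 14, 15), ("group_a", 17, 16), ("group_a", 21, 22),
        ("group_b", 14, 17), ("group_b", 17, 13), ("group_b", 21, 25),
    ]

    errors = defaultdict(list)
    for group, true_age, estimate in records:
        errors[group].append(abs(true_age - estimate))

    for group, errs in sorted(errors.items()):
        mae = sum(errs) / len(errs)
        print(f"{group}: mean absolute error = {mae:.1f} years ({len(errs)} samples)")

A consistently higher error for one group is the kind of pattern the published Yoti accuracy figures show across skin tones.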

While case law continues to evolve, in assessing and ultimately striking down key provisions of both the Communications Decency Act (CDA) and the Child Online Protection Act (COPA), the courts expressed concern that mandatory age verification conflicted with users' First Amendment rights by unnecessarily limiting access to legal content. Ultimately, incorporating any age verification method will require balancing tradeoffs between privacy, security, legality, and accuracy, among other concerns.

Given this, how should policymakers address age verification in regulation? To help them make meaningful changes that protect children online while balancing these tradeoffs, we developed a list of ten policy recommendations in a recent report, centered on three themes: balance, specificity, and understanding.

Regulators can help balance some of the tradeoffs inherent in choosing verification tools by embracing cost-benefit analysis (CBA). CBA is commonly used in the executive branch to analyze agency rules but has been used less frequently by legislators. As the legal scholar Cass Sunstein has emphasized, CBA is not only about adding up financial costs; it is about evaluating a range of costs, even those that are hard to quantify, such as privacy.
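As a toy illustration of that point, here is what a CBA ledger for a verification mandate might look like in code. Every category and dollar figure below is an invented placeholder, and the monetized proxies for privacy and access harms are exactly the contestable quantities Sunstein's framing asks analysts to defend rather than drop:

    # Hypothetical annualized estimates, in millions of dollars.
    costs = {
        "compliance_engineering": 40.0,
        "privacy_risk_from_id_collection": 25.0,   # monetized proxy
        "lost_access_for_users_without_id": 15.0,  # monetized proxy
    }
    benefits = {
        "reduced_harm_to_minors": 90.0,            # monetized proxy
    }

    total_costs = sum(costs.values())
    total_benefits = sum(benefits.values())
    print(f"Total costs:    ${total_costs:.0f}M")
    print(f"Total benefits: ${total_benefits:.0f}M")
    print(f"Net benefit:    ${total_benefits - total_costs:.0f}M")

The structure, not the arithmetic, is the point: hard-to-quantify harms get an explicit line in the ledger instead of being silently ignored.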

Regulators can also provide platforms with additional specificity in current and future age verification proposals, and can play an important role in helping platforms identify best practices for verifying user ages. We broadly support a risk-based approach to age verification, in which companies have the latitude to match verification methods to the level of risk posed by a product or feature. However, we recognize that companies may struggle to accurately assess the risks of different features and products. The National Institute of Standards and Technology (NIST) is well equipped to lead the development of a resource that synthesizes what is known about the risks of different digital products and features.
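A risk-based regime can be pictured as a simple mapping from assessed feature risk to a minimum verification method. The tiers and assignments below are hypothetical; in practice, a NIST-led resource would inform the risk ratings:

    VERIFICATION_BY_RISK = {
        "low": "self-declared birthday",        # e.g., browsing a read-only feed
        "medium": "AI-based age estimation",    # e.g., posting publicly
        "high": "government-ID verification",   # e.g., adult-only features
    }

    def required_method(feature_risk: str) -> str:
        """Return the minimum age assurance method for a feature's risk tier."""
        return VERIFICATION_BY_RISK[feature_risk]

    print(required_method("high"))  # government-ID verification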

There remains much we do not know about how age verification systems will work in practice at scale. Regulators can play an important role in helping us better understand the effectiveness and consequences of age assurance methods. In particular, regulators should adopt an experimental approach, establishing regulatory sandboxes in which platforms can temporarily test new methods or approaches in close consultation with regulators.
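In software terms, a sandbox trial could resemble a gated rollout: a small, stable slice of users tries the experimental method for a fixed window, with outcomes reported to the regulator. The sketch below is speculative; the enrollment fraction, hashing scheme, and method names are all assumptions:

    import hashlib

    SANDBOX_PERCENT = 2  # hypothetical: 2% of users enrolled in the trial

    def in_sandbox(user_id: str) -> bool:
        """Deterministically assign a stable slice of users to the trial."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return int(digest, 16) % 100 < SANDBOX_PERCENT

    def choose_verification(user_id: str) -> str:
        # Outcomes for sandboxed users would be logged and shared with the
        # regulator under the agreed terms of the trial.
        return "experimental_method" if in_sandbox(user_id) else "standard_gate"

    print(choose_verification("user-123"))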

While well-intentioned, many policymakers underestimate the difficulty of implementing age verification safely, equitably, and legally. Yet regulators can play an important role in helping platforms better balance the competing tradeoffs inherent in age verification.
