Age Verification in the United States: Insights from the Open Technology Institute

Tim Bernard / May 9, 2024

Over the last few years, dozens of pieces of legislation requiring age assurance measures, whether explicitly or implicitly, have been passed, both in US states and around the world. Federal bills in the US that include such measures have gained significant momentum, but their passage is still not a done deal. Developments in the last few months, as detailed in a timeline from the CEO of a leading vendor, have pushed the conversation about age assurance forward apace.

Into this environment, New America’s Open Technology Institute (OTI) released a new report on age verification, along with its context in the broader category of age assurance. It is by no means the first report on the topic. The main trade-offs of age assurance (such as accuracy, data privacy and security, equity, free expression, cost and convenience) are well-known to industry, regulators, and within the tech policy community. Earlier reports from government regulators and NGOs include:

As this list suggests, much of this work has been conducted outside the US, and so OTI’s report is notable firstly for its focus on the US context. It is especially relevant in light of the Supreme Court’s perhaps concerning refusal to block a Texas statute requiring age verification for pornographic websites pending the appeal of a Fifth Circuit ruling that upheld the age verification component of the law. This is despite some strikingly similar precedents from past decades that had been widely understood to bode poorly for the prospects of almost any legislation requiring age verification.

One other interesting feature of the OTI report is the space given to describing alternative measures to protect children from harm that are currently in use by major platforms—absent direct legal mandates—alongside formal age assurance measures. The report refocuses the reader on root policy goals, rather than merely reviewing options for age verification. In line with this, the first of the report’s recommendations is the pursuit of alternatives to verifying the ages of users:

“Using a mix of alternative methods to improve youth—and general user—safety online may more effectively and directly address concerns about access to age-inappropriate materials and the negative impact of online spaces. Ultimately, age verification is no substitute for privacy protections and increased user transparency and control.”

Following the report’s publication, OTI hosted an online panel discussion on May 1, where several experts reflected on the implications of age verification laws and methods. Just as the report was relatively skeptical of age verification solutions in comparison with some of its peers, the panelists also expressed serious concerns about the current state of US policy regarding age verification and assurance.

A transcript of the panel, lightly edited for clarity and concision, can be found below. It began with an overview of the report by its lead author, Sarah Forland, followed by a panel discussion with three experts. These were some key themes highlighted by each of the panelists:

  • danah boyd: The youth mental health crisis is a broad social problem and cannot be remedied with a narrow technical solution. To alleviate mental health harms to children, policymakers and advocates should favor efforts to guarantee their rights; provide them with mental and social support; and intentionally socialize them into the healthy use of technology.
  • Ashley Johnson: These laws are poor substitutes for foundational federal internet legislation, such as data privacy and security laws and online harms mitigations for users of all ages. Age-gating and parental consent laws may lead to inequities for the marginalized, such as those without IDs or youth with unsupportive or abusive parents.
  • David Sullivan: Proponents (and critics) of age assurance need to be more detailed and specific regarding targeted harms and proposed assurance methods. A key approach to the issue at this juncture is multilateral technical standards-setting with the participation of all relevant stakeholders, including the platforms.

Transcript

Prem Trivedi:

My name is Prem Trivedi, I'm the policy director of the Open Technology Institute at New America. We're delighted to have you join us this Wednesday for a timely discussion on age verification and whether it can keep kids safer online.

Quick note about New America and the Open Technology Institute: New America is dedicated to renewing the promise of America by continuing to realize our nation's highest ideals, honestly confronting the challenges that are posed by rapid technological and social changes and seizing the opportunities that those changes create. The Open Technology Institute, or OTI, is part of New America's group of technology and democracy programs, and OTI's mission is to advocate for policy and technical solutions that promote equitable access to digital technologies and their benefits.

The online safety of youth, which is a category that ranges from young children to young adults, is a topic that, as many of you are well aware, is dominating news headlines, legislative discussions, and dinner table conversations these days. And that's true not just here in the United States, but also globally. To give you just a couple of examples: last month, Florida passed a law that requires social media companies to implement age verification requirements for users who are 13 and younger. And just today, I believe, Australia's eSafety Commissioner announced funding of almost $7 million for the development of an age verification pilot—very fresh developments that illustrate the application of age verification requirements, especially for social media sites. This is part of a broader trend, and so today, we are here to talk about how we should think about that trend and what we should unpack about it.

Today's panel discussion and a report that OTI recently released focus on helping a range of stakeholders unpack the rise of age verification mandates. And our focus at OTI has been on explaining how these mandates work in practice, what trade-offs they entail, and what implications they have for all users’ safety and rights. The way we've approached our report reflects OTI's long-standing approach to demystifying technology and equipping policymakers and stakeholders with the context that they need to be able to assess different technical and policy interventions.

And before we dive into the panel discussion, we're very happy to have the report's lead drafter, Sarah Forland, walk you through the reasons why we wrote this report and explain some of its key findings. So, Sarah, congratulations again on the report's publication, which reflects a tremendous amount of work, and turning things over to you.

Sarah Forland:

Thanks, Prem. I'm excited to hear from all of our panelists today, I'm sure it's going to be a great event, but first, I'll offer a quick preview of the OTI report that was published last week that explains some of the context and key concepts related to online age verification and its potential impact on all internet users.

In recent years, we've seen growing concerns about the well-being of young people online. Legislators both here in the US and abroad are grappling with how to combat harms that are contributing to young people's mental health issues and negative experiences on the web. As of 2024, 12 states have passed age verification legislation as part of this effort to improve the safety of young people online. And while most of these mandates, both passed and still pending, target access to age-inappropriate adult content and to sales that are age-gated in real life, some states are going even further and applying age verification requirements to social media companies. And what seems like a straightforward solution is actually quite complex and can have implications for user rights, privacy, and security.

So to break down these concerns, let's start with the basics: what is age verification? It's a subset of age assurance, which is an umbrella term used to describe how websites vet whether a user is old enough to access them; the relevant age can be determined by the company's own policies, terms and conditions, or by legal age limits. Different age assurance practices include things like age-gating or screening, which is when you check a box saying, "Yes, I am 18 plus," or you input your date of birth into a pop-up window before you can access a website. Age estimation is when a platform or website estimates a user's age, and this can be done in a variety of ways: maybe looking at your online profile, activity, and friends to estimate your age, or through facial analysis, essentially asking, “Do you look old enough to access this website?”

Third-party verification is when a website trusts a third party to verify a user's age. That can be done through an app, a device, an account, or a hard identifier like a government-issued ID. So, for example, when you sign into a new website with your Gmail account, you may be signaling to that website that you're 13-plus because you've already attested to Google that you are. And then there’s age verification proper, which is confirming whether a user is above a certain age or within an age range. This is most commonly done today by checking a user's government-issued ID, accepting credit card information, or using biometrics. And each of these practices has its own trade-offs in the level of certainty, accessibility, and impact on user rights, privacy, and security.
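
To make the taxonomy concrete, here is a minimal sketch of the weakest method above, a self-declared age gate. It is illustrative only; the function names and the 13+ threshold are assumptions, not any particular platform's implementation.

```python
from datetime import date

MINIMUM_AGE = 13  # hypothetical threshold, echoing the common 13+ floor

def age_on(today: date, birth_date: date) -> int:
    """Age in whole years as of `today`."""
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def passes_age_gate(claimed_birth_date: date, minimum_age: int = MINIMUM_AGE) -> bool:
    """Self-attested gate: trusts whatever date of birth the user typed in."""
    return age_on(date.today(), claimed_birth_date) >= minimum_age

print(passes_age_gate(date(2015, 6, 1)))  # False for a genuinely 8-year-old user
print(passes_age_gate(date(2000, 1, 1)))  # True, whether or not the date is real
```

Nothing in this flow checks the claim against anything, which is exactly why, as Forland notes next, it is the easiest method to lie to.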

And I'm sure just by listening to these examples, you can see it's much easier to lie about your age during the [user attestation] age-gating process than it would be when a website asks for your ID, which offers a higher level of certainty but may be less accessible for folks who don't have the proper ID or don't want to disclose their personal data. And some of these practices, such as looking at someone’s online profile and friends to determine their age, may feel more invasive than sharing your credit card information, depending on who that person is.

These challenges become a bit clearer when we see how age verification is practiced and implemented in real life. Our report highlights several of the challenges that go along with requiring age verification. The first—maybe surprisingly—is technical immaturity: we're still working on technology that can do quite a few things at once. So, one, accurately verify a user's age without, two, requiring additional disclosure of personal or sensitive information, and, three, do all of this while protecting that user’s data privacy and security.

In the US, there are First Amendment concerns when we create barriers to accessing speech or exclude users, and the nature of disclosing personal information to verify a user’s age creates some risk in and of itself, as that information is collected, processed, and stored. There are also challenges when we think about the level of implementation for age verification. Stakeholders in this space, whether legislators or tech companies, have suggested that this can be done at a few different levels: at the app store level, at the individual website, at the device level, or even at the internet service provider level. And each of these comes with different costs, efficiency challenges, and risks to users. In addition, age verification requirements create challenges, costs, and barriers to entry for startups and smaller operators. And finally, age verification isn’t a foolproof solution; you can still borrow someone else's device, use another person's credentials, or use a VPN to get around many of the age verification requirements.

As lawmakers continue to move forward with age verification mandates, OTI's report offers five recommendations, which are to:

One: consider alternative solutions to age verification that may more effectively address concerns surrounding youth experiences online. Age verification requirements don’t address issues around mental health and well-being, they don’t protect user data, and they can easily be circumvented. So ultimately age verification is no substitute for comprehensive privacy protections and increased transparency controls for the platforms and websites that we use every day.

Second: in online spaces where age verification is absolutely necessary, processes should be designed to optimize user privacy and choice, so users can choose an age verification method that they feel both comfortable and safe with.

Third: some online platforms are moving ahead with different practices aimed at protecting youth online, such as limiting when hard identifiers or government IDs are requested, creating age-specific features that tailor experiences to different age levels, and expanding parental controls so parents can have greater insight into what their kids do online. All of these methods should be evaluated for both their benefits and their risks to ensure solutions offer users both transparency and agency over their online experiences.

Fourth: stakeholders should understand that all content-based restrictions will face constitutional challenges and can have unintended consequences on speech, specifically those for vulnerable and already marginalized communities.

And finally: we should invest in cross-sector research and collaboration to create standardized best practices and protocols that sync not only our technological approach, but our governance of these technologies as they’re implemented.

This is just a brief overview of OTI's new report, which you can find on New America's website. I'll turn it back over to Prem to introduce all of our great panelists who I'm sure will have more for us to consider when we’re thinking about age verification, its implementation, and impact on users. So, Prem, back over to you.

Prem Trivedi:

Thank you so much, Sarah, I really appreciate you running through that, and I think it really nicely sets the stage for the conversation today. Before we dive into that, we've got an excellent expert panel to help us talk through some of the most interesting and challenging social, legal, technical aspects of age verification. As Sarah says, this is a complicated issue and the terms don't always just mean one thing.

So, let me just jump right in with quick introductions of our panelists. They are easily discoverable online and you can read their illustrious bios there. I will do a short intro of each of them and we’ll get right into it. We're very happy to have with us today three panelists for this conversation. The first is David Sullivan, executive director of the Digital Trust and Safety Partnership, DTSP. And among many other things that DTSP does, they put out a report last year on guiding principles and best practices for age verification, so certainly timely and relevant.

Second, we are thrilled to have danah boyd, youth expert and author of the book It's Complicated: The Social Lives of Networked Teens, join us today.

And third, Ashley Johnson, senior program manager at the Information Technology and Innovation Foundation, ITIF, who also has deep expertise and a long track record of writing on youth safety, social media, and age verification. So thanks again for being with us.

So let me start with a really basic stage-setting, scene-setting question, to go back to what Sarah and I mentioned at the outset, which is that we are seeing this wide range of experimentation with age verification laws in the United States, at the state level and, to some extent, at the federal level, as well as globally. Why do each of you think that is? What are the reasons that you would like to foreground for everybody thinking about this today that explain why we're seeing this rise of age verification mandates?

danah boyd:

Thank you so much. I think it's a really interesting moment, and I think the first thing we need to flag is that there is a mental health crisis going on with young people. I think we can dive into that and talk about what that looks like, but it is very ecological, there are many facets to it, there are many factors in shaping it. But in the mainstream conversation, it has turned into a singular focus, which is that social media is the cause of that mental health crisis. Now, I think that the data actually doesn’t show that, and that it’s a really complicated picture, but it’s definitely become the main narrative and it’s mixed in, of course, with a national security conversation when we’re talking about TikTok.

So we have this situation where we’re, like, "Oh, let's blame technology.” And this is not our first moral panic around technology; in fact, it seems to happen about every decade. And it’s interesting because this is the third rodeo where the answer is to block young people from accessing technology, and therefore we need age verification. This is the third iteration of age verification as the solution to a moral panic around young people. The last round was around sexual predation, the round before that was around pornographic content. And let’s be clear: there’s an even longer history of age verification, which was about alcohol. So we've had these different iterations.

Now, the thing to understand about why we go into this frame in the United States is that we treat children as property of their parents. And this is very specific to the US; it's important for the audience to remember that the UN, the United Nations, has a Rights of the Child framework that really tries to think about where children should have rights versus parents. The United States is the only member nation of the United Nations that has refused to ratify the Rights of the Child. And the key there is that, in the United States, we believe that children are the property of their parents and therefore should not have independent rights, which I have major objections to.

And the other thing that's unique to the United States is that we don’t associate people with national identifiers. In fact, most countries have some form of national identifier for everyone, including young people. We don’t for reasons that I'm sure my colleagues here will get into.

And so, this dynamic is really not about verifying young people to be protective of them, but to make certain that they are completely controllable by their parents. And when their parents are failing at what the state believes to be good parenting approaches, the state can take over control. And that has a very interesting history here, one that we keep projecting worldwide.

Ashley Johnson:

I agree with many of the points that danah made so I'll just add onto that. I think that another important piece of context when it comes to the current debate surrounding age verification in the United States is that we’ve been trying to solve a lot of issues related to online safety, privacy, [but] we don’t have a federal privacy law [and] there are many other gaps in legislation, not just when it comes to children, but when it comes to all Americans. We’re falling behind basically the rest of the world on a lot of these issues. And I think many lawmakers, stymied by the lack of movement in those broader areas, see children’s safety specifically as an area where it might be easier to reach a compromise. The, sort of, “think of the children” mentality that motivates a lot of legislation can motivate good legislation, but can also motivate bad legislation under the guise of protecting children.

And I think this lack of compromise that we’ve seen on many of these issues has led, for many Americans, to a desire for something to be done when it comes to regulating the internet. A lot of people are desperate for that something to be anything, no matter what the consequences will be. And I think we're seeing that with the age verification debate, where a lot of supporters think, “Well, it's better than nothing,” when in many cases it's not better than nothing. And “better than nothing” should not be our standard, in my opinion, for what makes good regulation.

David Sullivan:

Plus one to just about everything that both danah and Ashley have already said. I would just add a couple of things. The first is that when it comes to the public conversation around age assurance and age verification mandates in the United States and around the world, I think it is worth noting that much of this debate has been driven by and led by, on the one hand, child safety or child rights civil society organizations, and on the other hand, the providers of different types of age verification technology services. And those are both incredibly important stakeholders who we need to listen to and learn from when we’re talking about these issues, but they are also not the only actors with a stake in how we think about young people online and delivering age-appropriate experiences to young people online.

So, our organization, the Digital Trust and Safety Partnership, brings together companies that offer some sort of online service and conduct trust and safety operations around best practices and standards. Our members felt that while the actions of individual companies on these issues get a lot of attention, the collective voice of companies providing these services, in terms of how they’re approaching age assurance and what they’re thinking about here, was somewhat absent from this debate. And so that’s why we formed a working group and put out the report that we did with some guiding principles and best practices last year.

I also think that, as other expert perspectives come into these discussions, you're seeing room for innovation and new approaches and new ways of thinking. So it’s worth noting that in just the past few years, data protection authorities in other countries, particularly in France as well as Spain, have been developing their own technical approaches to age verification or age assurance using privacy-enhancing technologies and adding new layers—which are complicated in terms of how they might be implemented—but you can see that, by bringing new perspectives to this discussion, we’re advancing what’s being discussed.

The other thing that I want to flag briefly, and we can return to it later, is that there is a very active discussion within international standards bodies about age assurance as well as age verification, and we’ve had a chance to take part in that. Last month, I was in Manchester for meetings of the working group of international standards bodies working on age assurance standards, held as part of a broader public conference about this topic. So there’s a lot of activity within international standards bodies that are thinking about this from a technical perspective, and I think it’s timely that everybody with a stake in this be thinking about both the public policy side and that standards piece, as it has the chance to really influence how we think about these things.

Prem Trivedi:

Thanks, all of you, for those opening comments, which are very helpful in framing up many of the issues where we want to go, which is to say: who are the stakeholders involved in shaping age verification mandates? And then also: who are the stakeholders affected, including the often underappreciated ones? We also have lots of questions around technical design, implementation, and standard setting.

Let me go to the implementation piece a little bit first, because Sarah helpfully mentioned that these terms sound, at first blush, sort of simple when you think about them in the context of IRL (in real life) interaction: you present an ID, someone checks it, your age is authenticated, you move on. That sounds simple, and it seems intuitive that it might translate just as simply into the online space we're dealing with, but in fact, that’s not the case. Sarah talked through how age assurance is a broader term that captures a bunch of age estimation techniques with varying degrees of confidence.

And so I wanted to ask each of you, not necessarily to run through those again, but instead, pick the thing from your perspective that is most challenging that you’d like to highlight about how age verification works in practice, or is least understood or most complicated.

Ashley Johnson:

I'll actually highlight two things. One of the things that unfortunately is only now getting discussed more, but that wasn’t discussed enough when these laws were being considered, is how many Americans don’t have government-issued identification to use for that particular kind of age verification, which is the type that most state laws currently require. This obviously has huge First Amendment implications, because now these people are barred from accessing social media in the states where those age verification requirements apply to social media websites. Social media these days is a public forum for sharing ideas, for accessing ideas, for social and political debate and discourse, and also just for everyday entertainment and communication with friends and family. These are all very important rights that people have, and they shouldn’t be gated by whether or not you have a physical ID card issued by a government that, in many cases, is not doing its best at making sure that everyone has a physical form of identification or at coming up with alternatives that are more accessible for everyone.

The other thing that I want to highlight, again when it comes to the social media aspect, is how many of these laws also require parental consent for children under the required age to access social media, once it's verified that they are under that age. And I would just echo what danah mentioned earlier: this puts a lot of control in the hands of parents, which can be a good thing in households where parents have their children’s best interests in mind. But unfortunately that’s not the reality in all households, and I think that’s something lawmakers failed or simply neglected to consider when coming up with those particular provisions.

David Sullivan:

One of the things that we highlighted in our report is that there are trade-offs between the effectiveness of different methods; the accessibility, inclusivity, and equitability issues that Ashley just highlighted; privacy and data protection; the affordability of different methods of implementation; and how risk-appropriate they are.

What I might highlight here, one of the hotter topics in this space, is age estimation technologies that do not confirm that you are a certain age but use AI to give confidence that you are above or below a certain age, including facial age estimation, where you take a selfie and an analysis of it estimates that your age falls within a certain range. There is a lot of activity, a lot of research, a lot of interest in this. Companies are experimenting with it, oftentimes layered in with other aspects, whether that is self-declaration of your age or other things. So, perhaps, if you change your age on a service from under 18 to over 18, they might ask you to take a selfie and use one of these [facial age estimation] methods.

Importantly, the accuracy of facial age estimation is challenged when it comes to young people: generally, it’s easier for these technologies to estimate your age with greater confidence if you’re older. And a lot of the really interesting work going on at places like NIST to evaluate these algorithms has looked more at older ages, not at this 13- to 16-year-old access-to-social-media question, which is such a hot topic right now. Also, the latest results from testing these facial age estimation technologies show that while accuracy has improved substantially over the past few years, there are still demographic disparities: generally speaking, this kind of age estimation works better for, say, white men or men from Europe than it does for other populations, which further adds to the equity challenges that Ashley highlighted with document-based verification.
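
As a rough illustration of how an estimate-with-uncertainty might be layered with a fallback method, here is a hypothetical sketch; the buffer value, names, and three-way decision are assumptions, not a description of any vendor's system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str  # "allow", "deny", or "fallback_to_id"
    reason: str

def gate_by_estimate(estimated_age: float, threshold: int, buffer: float) -> Decision:
    """Allow only when the estimate clears the threshold by a safety margin.

    Because estimation error is larger for younger faces, borderline estimates
    are escalated to a higher-certainty method instead of being accepted or
    rejected outright.
    """
    if estimated_age >= threshold + buffer:
        return Decision("allow", "estimate clears the threshold with margin")
    if estimated_age < threshold - buffer:
        return Decision("deny", "estimate is well below the threshold")
    return Decision("fallback_to_id", "estimate too close to call")

# Example: an 18+ gate with a +/-3-year buffer.
print(gate_by_estimate(24.2, 18, 3.0).action)  # allow
print(gate_by_estimate(19.0, 18, 3.0).action)  # fallback_to_id
```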

danah boyd:

It’s important to remember, to Ashley’s point, that it’s a lot easier to verify somebody is older than a certain age, when you have different kinds of credit cards or you have different IDs, than if they are younger than a certain age. And that is the really interesting tension because now we have these younger gradations that we’re talking about.

But the other thing that always boggles my mind is that there’s a rhetoric out there: “Well, it works in meatspace, so therefore it could work online.” I’m like, “It doesn't work in meatspace!” We have fought this battle about whether or not somebody is old enough to drink ever since a minimum drinking age was implemented, and an overwhelming majority of people in the United States drink before they are 21—usually to their detriment: we actually have the highest level of binge-drinking because of these rules and regulations.

I certainly grew up with a fake ID, many people my age did, and then we’ve watched this battleground of trying to make it so that there can’t be a fake ID. And the result is that what we’ve just done is found new ways and new loopholes for young people to get access to alcohol without even socializing them into a process related to alcohol, which we see in other places. So we’ve actually had a really broken process of this being the fix to our alcohol issue in an offline world. So I’m very confused as to why we’re going to take the same logic into our online world because it has the same set of battles to it.

So a decade ago, I did a study about COPPA, the Children's Online Privacy Protection Act, which is often implemented by asking whether or not you’re over 13. And what was fascinating to me was that well over half of parents had actively lied about their kids’ age to get them access to basic technologies. So parents are already helping kids lie, and that was a decade ago; it’s only gotten worse. There are all these ways of working around it, so it’s a strange framework.

Ashley also brought up parental controls and parental verification. And the thing that I always come to is: schools don’t even require formal parental verification for a kid to go there. The idea is that you have a guardian, they don’t require identifiers, we don’t require [verification of] whether or not somebody is a citizen, we don't require any of these things.

And so, this moment of doing this in this abstracted way for technology—there are going to be workarounds and we're missing an opportunity to actually help young people and actually help parents socialize young people into these technologies. And that's where I'm like, “What are we actually solving for?” And I feel like a lot of times what we’re solving for is a form of compliance and different people being able to make money off of this, rather than actually being there to help young people, which is where my heart is.

Prem Trivedi:

So danah, where you ended, I think, echoes the theme that each of you has emphasized: how exactly are we centering young people’s interests? How are we thinking about questions of state control over children's lives, and also parents' control over children? Which is to say that this is very much not just a technical problem; it is a social problem, a family structure and family dynamics problem. And I think that is often lost in the discussion. So I want to stay on that thread for a little bit. What else should we explore here in this conversation about the social dynamics and the motivations for age verification, and then the consequences in that context?

Ashley Johnson:

I'll expand on the thread I was talking about earlier when it comes to parental consent and, again, how this ties into social issues. There are unfortunately a lot of, for example, LGBT children or questioning children in unsupportive households. Online communities can be an incredible place of support for people from any marginalized community, but especially the LGBT community. And many of these laws that require parental consent, or that even give parents incredibly detailed, and in some cases invasive, access to everything their child does online, would be extremely detrimental to these children—to children of any identity who are in an abusive family dynamic, to all of these children who are unfortunately in households where their parent or guardian doesn’t have their best interests at heart. When we legislate this way, we assume that parents do have their children's best interests at heart; that is unfortunately a best-case scenario that doesn’t apply to every household.

danah boyd:

I agree with everything that Ashley said, and I’ll put on a hat here: I was a founding board member of an amazing mental health organization called Crisis Text Line, which allows young people, and actually anyone of any age, to communicate with a trained counselor when they’re in crisis. As a result, every night, we see thousands and thousands of young people crying out for help, using technology to get access to support in different ways. We also get to see the range of mental health crises that young people are facing. And let me tell you, it’s not about technology; it’s so many other things, from interpersonal conflict to, as Ashley mentioned, LGBT issues, which are huge right now.

The other thing that’s huge right now is that adults are often the problem. Young people don't have access to noncustodial adults that support them. That is hugely impactful when you're trying to address a mental health crisis. And so then we also have ways in which legislation in the United States has intersected in unexpected ways. I’m a huge fan of the Affordable Care Act, as somebody who broke her neck without medical insurance; this is such a good thing. And we’ve done this thing where kids can have healthcare access until they’re 26 when they’re tethered to their parents. But there’s a really weird unintended consequence of this, which is that young people under 26 can’t seek mental health services without informing their parents until they’re 26. This is having such crazy ramifications for being able to get access to mental health services at this moment in this country.

We don't have universal mental health services. Even if you want access to them, your waitlist is often 6, 9, 12 months, and that’s if you have funding to be able to pay for it. So we’ve got a structural issue where mental health is a crisis and where young people are turning to the online world to cry out for help. And what kills me in all of this is that we’re taking that visibility and we're blaming the messenger, we’re blaming the technology as the cause of that rather than seeing it as where people are desperately seeking help rather than figuring out ways of intervening and supporting them where they’re at.

And are there places where technology amplifies things? Yes, but it’s important to note that that’s been true of all media. For example, when somebody is already experiencing a state of anomie, when they're already existentially struggling, and they’re exposed to a TV show like "13 Reasons Why" or to the death by suicide of a famous person, they are more likely to move along the process from ideation to completion in ways that are devastating. That is also true online.

We need to be able to look out for young people who are crying out for help rather than trying to find ways to make them more invisible to the people around them and building more walls around them. And that’s sort of killing me here. We are also seeing massive structural issues that just break my heart. Climate anxiety is through the roof and it’s one of those issues that I don’t think, as Americans, we’re even realizing the consequences of this for young people. We’re seeing how our policies are actually splitting young people by red state and blue state. So there is so much going on in the mental health space that I want us to be resourcing, that I want us to be supporting young people about. But this doesn’t get fixed by using young people as an excuse to go after tech companies. That doesn’t help young people in many cases, and as Ashley pointed out, makes things far worse for the most vulnerable members of our society.

David Sullivan:

I would just add to that examples of how information controls that are ostensibly designed to protect young people are having unintended consequences or unfortunate effects. A good example of this is there was an article, I think it was last week, from The Markup, looking at effects of filtering of content in schools and libraries, and how that has had the effect of blocking young people’s access to mental health resources, to research about all of these critical issues that danah and Ashley have just mentioned. And while that is not the same as age verification or age assurance, it’s still about putting in place controls that can restrict access to information that could be part of the solution here.

Prem Trivedi:

There are, as David noted at the outset, different experiments with technology that is increasingly privacy protective—you’ll often hear the phrase zero-knowledge proof, or at least minimal knowledge proof. And this gets at one thing that I wanted to have each of you comment on and think through: there is, broadly, some cryptographic understanding of a solution set for doing age verification in a pretty privacy-protective, identity-protective way. Which is to say, basically, we want to minimize the information available to the website or service that is seeking a thumbs up or thumbs down on whether the person meets the age requirement. A third-party verifier sits in the middle, a trusted body that is meant to do that check and then, ideally, just send a yes or no signal to the website saying, “let this person in,” or “they do not meet the criteria.” And you would limit the ability to collect other signals and join them with the identity or other attributes of this person. That's generally the idea at a high level.
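
For readers who want that high-level idea in concrete form, here is a minimal sketch of the yes/no attestation flow. It uses a shared-secret HMAC as a stand-in for the verifier's real digital signature or zero-knowledge proof, and all names are hypothetical rather than any deployed protocol.

```python
import hashlib
import hmac
import json
import time

VERIFIER_KEY = b"demo-secret"  # stand-in for the verifier's private signing key

def issue_attestation(over_18: bool) -> dict:
    """Verifier side: emit a minimal, signed yes/no claim (no ID, no birth date)."""
    claim = {"over_18": over_18, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": tag}

def website_accepts(attestation: dict) -> bool:
    """Website side: check authenticity, then learn only the boolean."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["sig"]):
        return False  # forged or tampered claim
    return attestation["claim"]["over_18"]

token = issue_attestation(over_18=True)
print(website_accepts(token))  # True: the site never saw an ID or a birth date
```

The point of the design is data minimization: the website can check that the claim is authentic, but the only attribute it ever learns is the boolean.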

I want to posit a little bit of a thought experiment. Let’s say we had the best sort of tech implementation out there, something we feel tackles the power dynamics and tackles privacy issues as best as we can. How would that make you all feel? Would that alleviate the concerns that you’ve raised now, the rich layers of concerns about implementing age verification mandates?

Ashley Johnson:

I think alleviating the privacy and access concerns would go a long way. I still don’t personally believe that age verification or estimation, or these age-gating approaches to safety generally, are the best possible way to address the potential harms facing children online. I definitely agree that this hypothetical, perfectly privacy-protective, universally accessible technology would make age verification better. But I think many lawmakers are approaching the issue of children’s safety from this very narrow lens of, first, we have to know exactly who is a child on every different service that they use. And I don't think, for many issues, that that is where we should be starting.

David Sullivan:

I think that it is really important that folks on all sides of this debate look at the technical details. I agree with danah’s characterization that there’s a bit of a moral panic going on right now when it comes to social media and young people, but something I worry about is that there is a little bit of fighting fire with fire going on here, where folks who have been critical of age verification mandates are conflating age verification with things like facial recognition, which is fundamentally different. And the result of that will be that policymakers disregard the criticisms that are being made and enact these kinds of mandates over the objections of folks who are raising significant issues but not doing so in a thoughtful way.

So I don't think we need to respond to a moral panic with a moral panic, but instead look at some of the details, and also back up and ask what our highest priorities should be when it comes to providing experiences online that will help young people with that mental health crisis as opposed to hindering it. And then look at different types of services, which may have different types of needs when it comes to establishing confidence about the age of the people using them. So I think we need to get the order of this right as well.

danah boyd:

For me, I think that when we center something like age verification, we’re centering technology rather than centering young people. And we’re trying to provide a technological fix to a social problem without even figuring out whether or not that is what will actually help, when we’re like, “Oh, well, let’s just obsess on making sure that the technological fix is the best technological fix.” And I'm like, “Social problem, not technology problem!”

And the next layer to it is we’ve now put “legal” in here to mandate a technological fix to a social problem. And this is what Maria Angel talked a lot about—we wrote a paper about "techno-legal solutionism," which is a particularly snarky response because within the world of science and technology studies, we talk a lot about how problematic it is that the technology industry tries to fix all the things with technology, which is techno-solutionism, and we’re now mandating technological solutionism by law, which seems ridiculous in my mind.

And so, for me, I'm like, “This is a social problem that we're trying to tackle, we need to understand the facets of the social problem.” And when it comes to having young people having healthy relationships with anything, it is also a socialization process, it is about building resilience, it’s not about turning a magical age and everything being great.

And I go back to the alcohol example. Do some people have an extremely unhealthy relationship with alcohol? Absolutely; we can call some of that addiction. Do most people have a perfectly reasonable relationship with alcohol as adults? Yeah, frankly, most people drink a little bit and it's not a huge deal. Some are really funny, some have joy, some are kind of dark and terrible, some people are just relaxing. There are a lot of layers here. But we have a binge-drinking problem with young people because we’ve imagined that they should not drink until they magically turn an age, and the result is that we’ve actually turned alcohol into a proof of adulthood. So you have this really unhealthy, unmanaged process rather than a socialization process.

And I’m not advocating to give a four-year-old alcohol, nor am I advocating to give a four-year-old social media, but I am advocating for the idea that this is a process, a practice, a way of learning about this.

And if we're going to create a condition where every parent out there is on TikTok all day long, that’s part of the problem. Kids are like, “Ooh, what is that? I want access to that.” So we have to actually look at this holistically and not say, “Oh, you can’t have access to this until you are 18.” That is a terrible outcome because they’re just going to find workarounds. We need parents, we need adults, we need everybody being like, “Okay, let’s talk it through. What is social media? What makes it healthy or unhealthy? How do you have a good relationship with it? Etc.” That is a socialization process, and we don’t get there by creating age gating.

Prem Trivedi:

There’s two big themes I’d like us to consider in about the 12, 15 minutes we’ve got left. The first is a question that your remarks, each of you, and especially you, danah, just now teed up, which is: what are some alternatives? Because it is clear that lawmakers are grappling, many of them in very well-intentioned ways, with the mental health crisis, danah, that you mentioned, and with a host of other challenges that present specifically at the intersection of online content and offline harms. So we’re going to keep seeing these attempts to keep kids safe online through legislative mechanisms, through the use of state power and guidance. With some specificity, what would you say to lawmakers? It could be a US audience, it could be a global audience. I understand now, you can’t push a technical button, but I want to do something and we must do something, there’s an imperative. So what do we do?

danah boyd:

I think we have to bucket two things separately and not conflate them. There is a mental health crisis; let’s address it: let’s get young people access to mental health services universally. We’re starting to defund 988—there’s a whole set of things that can be done there. There’s a lot of things that can be done in terms of providing more resources into schools and other places where they’re at. We can build a digital street outreach program to be able to look out for young people that are really struggling when their parents are not available to them. There is a lot that we can do in the mental health space, but that requires centering young people and their mental health. And technology often doesn’t come into that except as being a service provider or point of access.

Then let’s center what's going on in terms of wanting to regulate the technology industry, which is no doubt out of control. That to me starts with data privacy provisions, which I know Ashley has a lot more to say about, so I’ll leave that to her. That just starts with taking seriously how to protect all people. Because I think it's always weird when we’re like, “We’ll fix it for young people,” and I’m like, for example, “see the disordered eating dynamics related to advertising that's going on online? Hint: the major problem there is menopause, not teenagers, so what does that mean?” When it comes to misinformation, we’re like, “teenagers!”. I’m like, “Nope, that’s the 65-plus crowd!” So this is where when we want to center a bunch of things that are happening with technology and I’m not convinced age is where we start, and I’m also not convinced teenagers is where we start; a lot of the problem is for older populations.

Ashley Johnson:

I completely agree. I think step one should be what this debate started out trying to avoid, which is passing a federal privacy law. We’ve been trying to focus on these smaller cutouts of technological issues that we think are going to be easier to solve, because it’s easier to present harms facing children as an urgent issue than harms that face adults, or realistically people of any age. But we do need a federal privacy law. We’re, again, behind the rest of the world in that respect, and states are passing their own laws that conflict with each other and give everyone, depending on where they live in the country, different privacy rights. We need every American to have privacy rights online.

There are also, again, other issues when it comes to our use of technology that have been framed in the current age verification debate as children’s issues but which, as danah echoed, are everyone issues. Mental health is a youth issue, but it’s also an everyone issue. Eating disorders, self-harm, and suicide are youth issues, but they’re also all-ages issues. And they are not just technological issues; they’re meatspace issues first and foremost, honestly, because all of these problems long predate technology and have cropped up in different forms depending on the dominant form of media of the day.

And then, again, the separate bucket of technological issues: there’s a lack of transparency, we need legislation when it comes to that. Security is another concern related to privacy, a data privacy law should also deal with cybersecurity requirements, reporting requirements, things like that. We need all of this base level legislation. We’re trying to tinker with these smaller issues without having the foundation there, and I think the foundation is where we should start instead of building on top of a foundation that doesn't exist.

David Sullivan:

I would just briefly add that, much like a federal privacy law can protect all people’s privacy, including young people’s, continuing to advance and mature the trust and safety practices of technology companies as a whole, having those practices become more standardized, and thinking about things like safety by design are ways, achievable through industry and multi-stakeholder efforts, to help protect the rights and the safety of all people, including young people.

Prem Trivedi:

Thanks to all of you. If I can invoke moderator's privilege to editorialize for a minute, I'll just say that I’m in agreement with what each of you said around a couple of those buckets. One, David, is this question of greater maturation in trust and safety, and better transparency provided by companies about what that means in context: meaningful transparency, not merely metrics. And I will strongly underline, as OTI has done very persistently, the need for federal privacy legislation for baseline protections. Such legislation will certainly not address all ills on the internet, and that’s not the claim, but many of the conflated harms that creep into the age verification conversation, where age verification is held out as a sort of silver-bullet solution, can be addressed in significant part through those sorts of legislative interventions. And as you noted at the outset, danah, the United States is lagging on these fronts, so we need urgent action there; that’s certainly OTI's perspective.

We got some questions from the audience that I wanted to note quickly, and which I think you all have just anticipated and answered in part. Which is to say: there’s a lot of criticism in this discussion of age verification as a potential solution, but shouldn’t we grapple honestly with the harms that young people face online with respect to pornography, other violent content, and content that promotes suicidal ideation or self-harm? I think you've addressed that in many respects, but I wanted to give you all a chance to respond specifically. It seems as though much of this discussion is saying age verification is not the solution, and yet there are these real harms. Anything further that you’d like to highlight briefly?

David Sullivan:

I think it is worth thinking very specifically about specific types of risks and harms when it comes to young people. Access to pornography or gambling or things like that are different from questions around child sexual exploitation and abuse, and questions about keeping adults away from children online, which is different from a lot of the mental health issues. And I think we need really subject matter-focused experts thinking about those specific risks rather than painting with a broad brush that all of this is one thing that can be solved through one measure.

danah boyd:

I’d also flag that the historical way in which we dealt with this, or talked about it, was “content, conduct, contact.” Which is to say: what are the content issues, what should people have access to or not? That is everything from advertising down to influencer content. Conduct is everything from bullying and harassment to all sorts of other interpersonal dynamics. Contact is when we are thinking about predation or about really problematic stranger-related dynamics.

And I think part of it is also that when we start to look at the data, it’s a lot messier than I think people really appreciate, and it’s messier still depending on which populations we’re dealing with. I was fascinated by a European study—because for those of us who’ve been studying young people and their relationships and mental health more generally, you can’t run certain experiments. And so the pandemic ended up creating a natural experiment for a bunch of us: what would happen to people’s behavior when they were all online and not allowed to interact with people in meatspace, not allowed to go to school, not allowed to go to the park, not allowed to do anything that’s actually interpersonal?

And something weird happened, bullying went down during the COVID pandemic, which shows us that one of our problems is schools’ relationship to what’s going on online. I’m never going to argue to ban kids from going to school, that’s a terrible plan, but I want us to understand the relationship. Just like when we started diving into the predation issue, predation is not strangers, predation is that uncle that you don’t allow to go to Thanksgiving reaching out to your kid.

There’s a big set of problems here. But, to David's point, we have to unpack them, we have to actually grapple with what’s going on here. And we have to grapple with the fact that when we come to these conversations, we imagine the ideal parents who're just struggling to keep their kids safe online. That is not the majority of what’s happening out there, the majority of it is that we have parents that are completely overwhelmed, they're not present for any number of reasons, they themselves are struggling with massive mental health issues or different aspects of addiction, or different aspects of stress, single parenting, a whole set of dynamics with adults where they’re just trying to make the day go by and they don’t even know what’s going on with their young people.

The other piece of data that’s killing me right now, and I mentioned it earlier, is noncustodial adults. When we look at resilience in mental health crises, one of the things we pay attention to is which adults in a young person’s world are looking out for them, making sure they’re okay, and who it is that young people can turn to when they themselves are not doing well, and the answer cannot and should not be parents, which is why we go with noncustodial. So you look at the aunties and the coaches, you look at the teachers and the mentors and the pastors, all of these folks. And the thing is that we often talk about young people's access to public space having declined—since the 70s, by the way—and that is true, and so has their access to noncustodial adults, again for all sorts of reasons, including people no longer living in the same area as the folks who would otherwise be around.

But that’s been declining for years, COVID completely ruptured it, so people end up having just their parents to turn to. And coming out of COVID where young people are starting to interact with adults again, those adults are so maxed out that they’re not there. Ask a young person today about whether or not their teachers and their coaches have the energy and resources and space to look out for those in their peer group who are struggling—they don't. That should be terrifying. So, for everybody listening out here, whether you'’e a parent or not a parent, go and figure out young people in your life that you can be that auntie to, be there for them. We need to rebuild that social fabric desperately because that is a critical form of resilience to deal with a mental health crisis that we are completely lacking at this point.

Prem Trivedi:

This conversation has really richly foregrounded and teased out for people why this is not just a technical question or a technical problem with a technical solution. There are social dynamics at play, there are questions about mental health and different strata of society. And as each of you has reminded us, we need to disaggregate those with some specificity, we need to talk about which harms we’re worried about, which solution sets we think are going to deal with them.

And nevertheless, I do want to return to the question of technology and policy specifically, and it’s helpful to do that now as we wrap this conversation up with everything that you’ve put on the table. We’re still seeing and we are going to see, I think, the rise of online age verification mandates and experimentation, calls for study, things of that nature. And so, what would you say to lawmakers and regulators who are experimenting with rules already on the books and implementing them? From a standards-setting perspective and a principles perspective, what are the things that they ought to keep in mind? What are the guardrails they should build? What would you tell lawmakers about how to do it responsibly?

David Sullivan:

There’s an ISO (International Standards Organization) process to develop voluntary consensus standards for age assurance. And I think that is a place where we can have thoughtful conversations about the trade-offs between different approaches and we can look at different types of implementations, create optionality, create room for experimentation and innovation within this specific area. That process is not easily accessible, but is moving to a new phase where national standards bodies will be reviewing what’s called a committee draft of a framework for age assurance standards. And I would strongly encourage everyone who is really actively interested in the space, via their national standards institution—in the United States, it's called ANSI—to get involved in that process because we need to have all the different perspectives at the table for that to be as robust as possible. And I think that is a useful place to channel those conversations, as opposed to looking to swiftly enact mandates.

danah boyd:

I guess, for me, when it comes to age verification, my answer is simple: just don’t. The only way I’m willing to even start a conversation about standards is when we adopt the standard of the UN Rights of the Child: when we ratify it and are willing to actually implement it and protect vulnerable young people, then I can talk about these structures. But until then, this just treats children as the property of their parents in a way that really harms vulnerable youth. So I can’t get behind it in any form in the United States under current conditions.

Ashley Johnson:

If I were to give one final point, it would be that we shouldn’t be trying to solve these granular issues without having solved the broader issues behind them. From a technological standpoint, that means, as I mentioned, establishing these basic rights and responsibilities for the entire country. And on the flip side, the social side, it means having these difficult social debates that we've been avoiding by saying, “Well, every parent is going to do it differently,” and putting all the responsibility on parents, which most parents are not equipped to handle simply because it should not be solely their job.

Prem Trivedi:

Thanks to each of you for what was a really rich discussion, and thank you all for joining us virtually today. Special thanks again to Sarah Forland, lead drafter of OTI's report. Thanks again, everyone. Have a great day.

Authors

Tim Bernard
Tim Bernard is a tech policy analyst and writer, specializing in trust & safety and content moderation. He completed an MBA at Cornell Tech and previously led the content moderation team at Seeking Alpha, as well as working in various capacities in the education sector.
