Podcast

What to Expect from US States on Child Online Safety in 2026

Cristiano Lima-Strong / Jan 11, 2026

Audio of this conversation is available via your favorite podcast service.

2026 is poised to be another landmark year for the child online safety debate in the United States.

In recent years, states have passed dozens of bills aimed at expanding protections for kids as they navigate risks on social media platforms, AI chatbots and other tools, with more likely on the way. Lawmakers in Washington, meanwhile, are considering a flurry of proposals that could set a national standard on the issue. But many of these efforts are facing legal limbo as industry and some digital rights groups allege they violate constitutional rights and trample on privacy.

I spoke to three experts tracking the issue to assess the current policy landscape in the United States and how it may shift in 2026, particularly as state legislators continue to take up the cause:

  • Amina Fazlullah is head of tech policy advocacy at Common Sense Media, a group that advocates for child online safety measures. She previously served as a tech policy fellow for Mozilla and as director of policy at the Benton Foundation.
  • Joel Thayer is president of the Digital Progress Institute, a think tank that advocates for age verification policies. He previously clerked for Federal Trade Commission official Maureen Ohlhausen and served as policy counsel for the tech trade group The App Association.
  • Kate Ruane is the director of the Free Expression Project at the Center for Democracy and Technology, a nonprofit that advocates for digital rights. She previously served as lead public policy specialist for the Wikimedia Foundation and as senior legislative counsel for the ACLU.

What follows is a lightly edited transcript of the discussion.

Cristiano Lima-Strong:

Kate, Amina, Joel, thanks so much for joining us. I wanted to set the table for our discussion a little bit before we got started. We have seen an explosion of activity around children's online safety in the United States in recent years with states passing dozens of bills to require social media sites and AI tools to implement greater safeguards for kids, to require companies to vet users' ages and to prevent harms like harassment or abuse.

Many of the highest profile laws have faced industry challenges or been blocked with their statuses still being hashed out in court. Meanwhile, lawmakers on Capitol Hill continue to have discussions around potential federal standards that could override many of these state laws. That all signals that we are in for yet another pivotal year for children's online safety in the US in 2026.

So with that, we thought it'd be a great time to check in with some of the folks who have been tracking this most closely to get a sense of where things stand and where we may be headed this year. I wanted to start off getting a sense of the state of play at the state level where legislators have been very active.

Kate, would be curious to hear from you. What do you see as some of the main approaches or most notable models that states have taken up to try to address the varied concerns around children's online safety in the US?

Kate Ruane:

Yeah, sure. And I invite my co-panelists to jump in here because states have been doing quite a bit and quite a bit in a kind of varied way. I tend to think about it in a few separate buckets.

So the first bucket is the various age appropriate design codes that we have seen being enacted starting with California's, which has now been enjoined multiple times by courts for potential First Amendment violations. But other states have attempted this similar model and have tried to tweak it to deal with some of the issues that the courts have identified.

Another version of this is to require age verification by websites for access to content on the websites. One of the most common ones of these is requiring websites that host a large amount of adult content to do age verification of all of their users to ensure that people under the age of 18 aren't accessing that content. A law like this, from Texas, was actually upheld by the Supreme Court recently.

Another version is to ban children from accessing social media altogether, either children under the age of 16, the way that Australia does, or children under the age of 13. These laws are all facing First Amendment challenges as well.

And then another way, another kind of bucket of laws that is cropping up and that has recently started to see some court action are the laws that require app stores to do age verification so that apps can then have that data and know the age categories of their users. Those laws are also now being challenged in the courts.

So we're seeing a lot of activity both at state legislatures to try to require companies to amend the content that they deliver to children or to prevent children from accessing certain content or certain app stores, and then also to require these types of entities to engage in age assurance or age verification so that they know the ages of their users. And I expect that trend to continue. We're also seeing a lot of court action around those types of efforts.

Cristiano Lima-Strong:

Yeah. Does that align with your view of the landscape? I mean, are there other notable trends that you've been tracking in terms of what states have been doing so far?

Amina Fazlullah:

Yeah, that definitely aligns with what we are seeing. I would add to that the social media warning labels, a newer approach following the Surgeon General's op-ed and call for social media labels. That has already passed in three states: Minnesota, California, and New York. And there are a lot of similarities between the versions in California and New York, so I suspect we're seeing a model for that type of legislation that will likely come up in the year ahead.

Kate Ruane:

I would throw one more in there too, now that I just thought of it: laws directed specifically at generative AI outputs. Some of these would require age verification and either ban children under a certain age from accessing those types of services, require labeling or disclosures that it is not a person the user is engaging with, or require disclosures or attempt to prevent the chatbot from producing output that claims it's a medical professional or someone qualified to give certain advice.

Amina Fazlullah:

I would just add that on AI and kids, I think there is a lot of interest and concern in state legislatures. And I think we are already seeing codification of some baseline guardrails. So, as Kate mentioned, notifications when you're talking to a chatbot, so that you know you're talking to a chatbot, and mental health redirects, so that if crisis intervention is required, there is some kind of mental health redirect.

Those baseline guardrails have popped up in at least two states, New York and California, but there's also interest in more comprehensive safety measures, looking at things like risk-based audits and, as Kate was alluding to, more limitations on the operation of chatbots that are harmful to kids.

Cristiano Lima-Strong:

So Joel, as Kate alluded to, at the heart of this is age verification. If you're crafting laws around children on the internet, you need a way to figure out who's a child and who's a teen. This has been contentious and is still being hashed out legally, but states have been taking a few different approaches in terms of thinking about who should be responsible. Some have focused on platforms, some have focused on app stores, some on the manufacturers. Could you talk about how states are splitting on this issue and your view of the different approaches?

Joel Thayer:

As you can imagine, you heard a multitude of different approaches to resolve what are, I think it's fair to say, acute issues in the digital space with respect to kids and their interactions with these applications.

It really comes down to where the politics are in the various states. And when it comes to age verification in particular, it's a pretty easy political sell to want to block pornography sites, for instance, from kids.

But on age verification, again, we have seen some success in the courts. Kate alluded to the Texas porn law that was just upheld by the Supreme Court under intermediate scrutiny. There's some good dicta in there suggesting that this could be applied to general applications, but the court hedged on the type of content that would fall under this category. Specifically, if it's adult content or obscenity to children, those types of applications seem to be in play. The question is, what does that mean?

And I think courts are starting ... well, legislators are starting to look at that and saying, "Okay, what kind of guidance does this give us for general applications like social media?" And you saw that with Florida and its law, which was directed specifically at imposing age verification mechanisms and also a straight up ... I wouldn't say it was a prohibition, but basically design prohibitions. That law was upheld by the 11th Circuit.

And so with courts, I think right now everyone's testing it out, but the app store age verification mechanism seems to be something that most folks understand a little bit more. And then you also have the added benefit that Apple and Google basically cover most, if not all, mobile app distribution. And you're seeing most of the alleged harms happening in the mobile space over your standard laptop.

I think that it really comes down to a couple of things. One is how comfortable the legislators are with the case law, at least those are the conversations I've had. And two, do they want a more targeted prohibition? And three, what are the harms that they're actually trying to address? So it comes down to those three factors for me mostly. But in terms of an actual split, I'm getting the sense that every state wants an all the above strategy. Everyone who's worked with state legislators, I'm sure everyone on this call has, everyone has their favorite flavor of ice cream. And so they all have their various solutions. And it seems like states are pretty eager to test out these theories and test out these applications of laws. But I don't see like a 50-50 split or a 20-80 split. It really just comes down to what the political will is for the particular legislator and where they see the most acute issues and what they think the most effective mechanism is.

Cristiano Lima-Strong:

So Kate, you alluded to a lot of the court challenges that we've seen. How many of these laws that we're talking about have actually been able to take effect and be enforced? And how much of this is still up in the air in terms of ongoing litigation?

Kate Ruane:

There's a ton of it that's still up in the air in terms of ongoing litigation. Very few of these laws, as far as I can tell, have been allowed to actually take effect. And to the extent that they have, it has been with a caveat from the justices of the Supreme Court that they actually think the law probably violates the Constitution.

Joel mentioned that he believes there was some dicta in the case about the Texas law that relates to online adult content indicating that age verification could be acceptable as applied to generally applicable content. But I actually disagree with that interpretation. I think the court in FSC versus Paxton, which is the case that dealt with the Texas law requiring age verification to access websites that host a certain amount of content that is obscene as to minors, was pretty clear that it was only talking about access to content that is not constitutionally protected for minors.

Minors have a constitutional right to engage in and access all constitutionally protected speech as long as it is not obscene for them. And what we've started to see is courts grappling with some of these state laws that would block or burden minors' ability to access constitutionally protected content on social media or through app stores, for example. And courts by and large are saying, no, that type of restriction is subject to strict scrutiny, unlike the FSC versus Paxton court's application of intermediate scrutiny to access to obscene content. And they are saying that this probably violates the Constitution.

We just saw it happen with respect to Texas's App Store Accountability Act, where I believe the district court likened it to creating age verification requirements to enter a bookstore.

So we are absolutely seeing courts begin to grapple with the significant First Amendment and free expression issues here, but we're also seeing them think clearly about it and say: when you apply this generally, when you say that children aren't allowed to access publicly available spaces and engage in speech that is constitutionally protected for them, you have a very high bar to clear, and states currently haven't cleared it. And I do think that state legislators should be thinking really deeply about that, because no law that any state legislature passes that runs into constitutional barriers like this is going to protect a single child in any way.

Joel Thayer:

Obviously, we have a firm disagreement on exactly what the court said. And I think the dicta I'm referring to is specifically Justice Kavanaugh going in there and describing the burden that age verification actually imposes, and the burden with respect to the time we're actually in. So he basically points out and says, this is, in his words, a modest burden on adult speech. And then he goes on to say that, with the advent of smartphones, with the advent of devices in your pocket, this is a completely different scenario, one that was unimaginable to the courts in Reno and Ashcroft, which are the cases that mostly guided the courts up until that point, even before the TikTok case.

I think you're seeing a court, or at least a Supreme Court, that's far more open to evaluating this not as total strict scrutiny territory. And indeed, he even said that the idea that age verification mechanisms automatically trigger strict scrutiny is inappropriate. They're going to evaluate it based off of what the law is. They're not going to say there's a categorical rule of strict scrutiny now. I think Kate did a very good job of explaining that, look, there's some ambiguity here. It depends on what you're trying to do.

I do think that the complication here is how do you apply Paxton? How do you apply TikTok v. Garland? How do you apply the NetChoice cases in Moody? And it's tough. I don't think it's an easy analysis. The idea that accessing an app store or any sort of social media is like accessing a library, I think courts are getting more skeptical of that. You saw that in the 11th Circuit, and you're seeing that with the Supreme Court not upholding injunctions, obviously in the Tennessee law the Supreme Court was reviewing, where they decided not to uphold the injunction. They did, I agree, have a caveat. They said, "Well, let's see how the likelihood on the merits works out and then we'll review it."

But again, I think you're seeing a high level of skepticism that age verification mechanisms are immediately this barrier to entry on the front end. On the back end, it's also becoming clearer to me that they are looking at the laws themselves and not just saying restriction on social media means strict scrutiny. I think it's going to require NetChoice, CCIA, and others to do a little bit more legwork on describing what their speech interest is.

Because again, part of the issue that you saw even in the TikTok case was a fundamental question: what is the speech interest that you are trying to protect? And also, are we protecting the listener? Are we protecting the speaker? Maybe it's a bit of both, but at the end of the day, they are going to have to articulate what that speech interest is, which the tech companies have not done a very good job of, at least when it gets up to the Supreme Court or the appellate level, particularly at the Fifth Circuit, where the App Store Accountability Act will be reviewed, which is a pretty favorable jurisdiction, I think, if you're defending one of these laws.

So again, it's very much a mixed bag, but I just wanted to make sure I responded a little bit to that because I think I was called out a bit.

Kate Ruane:

I totally appreciate your perspective. And obviously we're going to disagree on this, Joel. That's just how it's going to be. And that's okay. That's okay. And I acknowledge that-

Joel Thayer:

I just want to make sure that I at least got on record of like-

Kate Ruane:

Absolutely, absolutely. And I'm glad you did. I'm glad you did. I do want to respond just briefly to a couple of the things that you said, and I want to make sure that I'm responding correctly. You're talking about FSC versus Paxton, which was authored by Justice Thomas, right? There's no concurrence by Justice Kavanaugh in there.

Joel Thayer:

Oh, sorry. Yeah. Yeah.

Kate Ruane:

I'm just making sure that I'm thinking about the right thing. Okay. I hear what you're saying, that in that case they did call age verification a modest burden. But they were doing so, as you were pointing out, in the context of speech that is not constitutionally protected when delivered to minors. That is a different ballgame. And the court was clear within that particular opinion that it is a different ballgame when we're talking about the targeting of constitutionally protected speech, for which strict scrutiny remains the standard. And I think that's an important distinction that the court in FSC itself made.

I will stop there because I know we want to talk about other things, and Joel and I can go back and forth on this for hours and hours and hours, and he's made good points here and I get it and there's going to be legal arguments in the courts and we'll see how that shakes out.

Joel Thayer:

Yeah. And I'll just add one or two things. Look, First Amendment jurisprudence is very messy. It's not like there's a linear line between when this fits into one category or another. I think what the courts are doing, and I agree with Kate, is trying to firm up a little bit more of what they're talking about. So again, I don't think that Kate and I are that far apart on where things ultimately are. When you get two lawyers in the room, you're always going to have varying opinions, but it's very much a case-by-case perspective on this. And I think ultimately we're going to see a little bit more fleshing out of what it means to have a First Amendment claim, what the speech interest is that we're talking about. And I think that's what the courts are trying to grapple with as this goes through the appellate process.

Cristiano Lima-Strong:

Thank you both so much for that. It's really interesting. And I think it speaks to the fact that the jurisprudence is still very much evolving and perhaps unsettled to an extent.

Amina, I did want to get your thoughts on how state legislators are dealing with and grappling with that. Are they refocusing efforts? Are there some approaches that you think might gain more momentum in light of some of this, and how is it factoring into priorities at the state level?

Amina Fazlullah:

As we just heard, there's a lot of uncertainty. It's somewhat unsettled. And so these questions are front of mind for state legislators. As a result, you're probably seeing more of an attraction toward feature-based legislation, to try to avoid content issues. You're also seeing, like I said before, a new interest in the social media warning label, which has its own pathway to First Amendment issues as a label, but that's different from the discussion you just heard. And then, like I said, in the AI space, I think the focus is on the product and how it's interacting with the customer or the user, and what the duties and responsibilities are there.

And so I think you're seeing legislators, until things are more settled, expand what they're working on to include different territory, like features-based legislation or legislation that's focused on protecting kids as they use AI products.

Cristiano Lima-Strong:

So talking a little more concretely about 2026 and what we might see this year, Joel, I would be curious to hear from you: what trends do you think are likely to continue or intensify, and where might states reorient their priorities this year?

Joel Thayer:

So I think you're going to see a lot of the same. I don't see them deviating too far off of the age verification issues. I do agree with Amina, and I think Kate also remarked on this as well. Generative AI is going to be a major political boondoggle, especially now that you have the White House putting out their AI EO, which I think put a steroid in the water with respect to a lot of these different conversations happening at the state level. And you're seeing a lot of federal legislators responding to that too.

And the question is, who's going to get to the finish line first? Is it going to be a federal law or is it going to be a state law? I mean, even though I have significant doubt that an AI EO is going to prohibit the enforcement of all of these state laws, I do think that it has added some extra little oomph, a little more pep in the step of a lot of these legislators who are trying to solve these problems at the state level. And it's almost like a call to them to figure out more, or an invitation to find different solutions, especially when it comes to generative AI. But my instinct is that you're probably going to see more of that too.

I think the issue of chat, I mean, for lack of a better term, the chatbot issue, is something that constantly gets brought up in every conversation that you have. Whether it's describing age verification from the app store level to social media or whatever, there's always this conversation, whether rightly or wrongly, that turns into, "Well, what about chatbots and how do we stop that?" So I think that is going to be another focus. You're seeing that both at the state level and at the federal level. Senator Josh Hawley and Senator Blumenthal have already planted their flag on that. I think you're going to see more of that.

And you saw bits of that in the E&C hearing with all the kid bills. Of course, it was more of a footnote than one of the marquee conversations they were trying to have. But I do not see that going away. And especially now that you're having a pro-accelerationist perspective from the executive branch on AI, there are going to be those questions from child safety groups like, wait a minute, we're all for winning the war on AI, but we don't want kids to be the casualties of that. So, to use a terrible pun, how do we split the baby on this? And how do we ultimately get to a place where the child advocates feel like they have been heard and listened to while, at the same time, promoting this idea of AI dominance?

And I think that's going to be the major fight, both at the federal and state level, but I think you're going to continue to see a lot of the same with respect to social media and also with age verification, especially as these court cases come through. I think that's the state of play in my mind, but I'm very open to hearing where Kate and Amina feel things are going.

Amina Fazlullah:

Yeah. I think that there is definitely tension between the AI preemption efforts at the federal level and the efforts at the state level. And I think it's interesting to hear Joel say that it's boosted enthusiasm potentially at the state level to do more. I'm not sure that that's the case, but there's certainly a spotlight on what states are doing in a way that might not have previously been there. And I think there are a few different components of kids' AI safety efforts that we're seeing. One is, like I said, the sort of risk-based assessments and audits. Another is around updating privacy protections for the inputs of kids. That's a pretty big gap considering how kids are using generative AI chatbots already and our understanding of the willingness of children to divulge incredibly rich and personal information about themselves and others through chatbots. So it's a pretty big hole.

There's also a new interest in targeted advertising specific to adding AI products into the ecosystem, but then there's the suite of bills that are looking at setting baseline guardrails and then going beyond that to try to get specific about harmful features of chatbots and really digging into issues around manipulation and other sort of harmful features from the chatbots.

I think driving this is some of the research we've done, and others have done, to lift up this usage that's emerging and how quickly it's coming onto the scene. We found 70% of teens are already using what we've described as AI companions, so having more of a dependent, relationship-like interaction with an AI chatbot. About 50% of them were regular users, and 30% already preferred conversations with the chatbot over, or similarly to, conversations with other humans. And that was research that we did many months back.

So the pace of uptake of these products and the potential impact is pretty dramatic. And so I think that's what's animating a lot of legislators right now.

Kate Ruane:

I just want to add to what Amina and Joel have both said and create kind of a wishlist for myself a little bit here too, because I'm not sure ... A lot of what Amina just said made me think that I also wish that state legislators were really looking into the privacy aspects of generative AI models with respect to everybody, but also specifically with respect to kids. Because as Amina highlighted, they are having intimate conversations with these companion systems. They are telling them things about themselves. The system is ingesting this data. What is it doing with it? Does it have any restrictions at all on what it can do with it?

And as these systems and as these companies that are offering these systems seek to monetize these services, how is that going to interact with kids' data and what are the restrictions on how they can use that data? I'm not sure we're seeing enough energy behind that question.

And it also leads to my second point, which is that it's another reason it makes no sense for the federal government currently to prevent states from engaging in legislation in this space. We have not seen the federal government step up and create comprehensive consumer privacy legislation that applies to social media, let alone to generative AI systems. If it were to broadly preempt states' ability to protect kids online or to protect kids' data, we could see a blocking of efforts that could create significant protections against harms that we see coming down the pike.

Cristiano Lima-Strong:

You all have talked about the executive order dealing with artificial intelligence legislation at the state level. There's also the specter of federal legislation that could preempt state laws. Kate, Joel, you recently testified on this on Capitol Hill. How are you thinking about, and what will you be watching for in terms of, the interaction between states moving ahead on some of these issues and legislators on Capitol Hill looking at standards that could potentially override them in the coming year?

Joel Thayer:

Well, I think this is an area where Kate and I actually agree. And when I say that, I don't mean we never agree, but in this area, we were both pretty skeptical of the broad preemption and its "relates to" standard. And that was one thing that I ...

As an organization, we are for bipartisan solutions that are incremental in approach. We don't think the "relates to" preemption standard meets that mark. I would rather see a conflict preemption or something similar, where the federal government has identified specific things that it does not want to see in the market and leaves it up to states to decide whether or not the federal government has fully protected their interests, versus a "relates to" standard, which tends to be very, very broad and it's not clear where-

Cristiano Lima-Strong:

Could you unpack that a little bit for folks, the "relates to" standard?

Joel Thayer:

Sure. So conflict preemption is what it sounds like: if law A and law B can't live in the same universe, the federal law wins, assuming law A is the federal government's and law B is the state's. The "relates to" side is far more ambiguous, 'cause some will argue that a "relates to" standard works a lot like conflict preemption, which it doesn't. "Relates to" can mean literally whatever the thing is describing.

So if we're talking about certain preemptions or certain prohibitions against chatbots, let's say the federal law requires labels, let's say it requires some sort of notification, but there's no age verification mechanism in there. If a state wanted to impose an age verification mechanism on chatbots, there is a very good argument that the "relates to" standard would preempt that, because it would basically be saying the federal government has already spoken on this issue of chatbots and decided it did not want to go down that route. The state would therefore be preempted. It works as a giant swallow-everything-up preemption, where the state cannot act because the issue has basically been taken up by the federal law.

So essentially, the federal government does not have to precisely articulate what it wants to regulate. It can basically say, "We want to regulate this and everything else is preempted." That might be a little too far, a little too rich for my blood. I think that states do have a say here, and I think ultimately that's how we're going to get some of the things that Kate wants and I want too, which is more of a uniform privacy law. And I do believe that there are aspects of state laws that just work better than how the federal law would operate. So that's the one thing I'm looking at when it comes to these preemption conversations.

I'm not a big fan of the wide swath preemption. I would rather it stay narrow in the conflict zone. But yeah, I'm very interested to hear Kate's view on that as well.

Kate Ruane:

I mean, I genuinely agree with Joel about the preemption question. I think we might disagree on the specific things that we would want states to enact, but the thing that I would add is that the "relates to" preemption standard, which Joel described ably, would also preempt the application to children of laws of general applicability that have existed within the states for a long time.

So state consumer protection laws might not apply to children anymore. If the federal government says we're preempting every state law that relates to kids' safety, that to me is an even bigger problem. It's not just the specific, targeted things that states want to do; it's also the broad, generally applicable, regular tort law that we've all been using forever and ever that would be swept into this as well. And I think that's hugely concerning.

Amina Fazlullah:

Can I just add that if we're thinking of something like AI, but even generally with respect to tech, these are fast-moving industries. The products change quickly, the threats evolve over time. So you want to have every cop on the beat. You want state legislators engaging, and AGs identifying emerging harms and enforcing against them using their existing tools. And that all feeds into, I think, a very healthy federal-level process, so that federal regulators and folks in Congress have a very clear sense of where the gaps actually are and where the best policies actually lie. Preemption eliminates all of that forward-looking thinking as well.

Cristiano Lima-Strong:

So we certainly could spend more time talking about what the federal standards could look like. As I alluded to, Kate and Joel testified on this recently; I'd recommend folks check that out if they're interested in hearing more. And Common Sense, of course, has been very active on that front. I just wanted to close out by giving you all a chance to offer final thoughts on things you're watching out for this year in the child online safety space, at the state level and in the interaction with the federal government.

Amina Fazlullah:

The first thing I'm going to be keeping an eye out for is bills moving forward related to generative AI and kids' safety, but also broader bills around generative AI and general consumer protection. I think we'll see both of those moving forward. And in those broader bills, I think we'll also see protections for kids.

And then I think we'll also see features-based legislation or warning label type legislation moving through the states. We're going to be keeping an eye out for the advancement of age assurance through whatever mechanism starts to move, whether it's at the state or the federal level.

And then finally, I think keeping a close eye on any potential updates to privacy. As I mentioned before, I think they're really critical in light of AI technologies, and so I'll be interested to see what states do.

Joel Thayer:

I'm actually very encouraged that these conversations are happening and are still happening. I do think that whether you're a Democrat or a Republican, everyone really agrees that there is an issue in this market that needs to be resolved. And you're going to see a whole swath of different attempts to quell the concerns.

I won't reiterate what Amina just said, 'cause I think she stated it very well that we're going to see all of those issues, but I am actually hopeful that we will see something that Kate wants and I want as well, which is a real at-bat for privacy. Especially now, when you're talking about the sheer amount of data that's going into these AI systems, without a doubt there's going to be a question as to who owns that data, what data can be used, how it can be used, how it can be monetized, and how it cannot be monetized.

All of those questions are embedded in this larger, broader conversation about our interactions with these tech companies and platforms. And so I don't see this issue going away. I think this issue is probably going to show up in multiple contexts. I don't think it's going to be limited to the child safety space. I think you're going to see it pop up in the competition space, and you're going to see it pop up in consumer protection. I'll just say grab your popcorn, people, because it's going to get fun and wild.

Kate Ruane:

I love batting clean-up, because I get to say I agree with everything that Joel and Amina have said and reiterate it. I think Amina did a really good job of running down the various topics that I'm looking for state legislatures to engage with over the next year, and I expect them to do so.

I'm also going to look at how court decisions start to come out and how those interact with state policy proposals, how state legislators react to some of what the courts say in order to try to build up their case or change what they're doing in order to respond to some of the concerns.

And the other thing I want to see, would love to see, is for some of the age verification proposals that exist out there to start to really deal with and create safeguards around the privacy issues that come with requiring age assurance. Because I believe in harm mitigation. I might not think that using or requiring age verification is the right thing in many circumstances, but if you're going to require it, it is very clear that privacy risks are being created and not being sufficiently dealt with within these statutes.

So I would love to see some of the state legislators start to grapple with that and build some of these protections into the statute itself: requiring the companies that have to do these processes, or that contract with companies that already do them, to collect this data properly, use it only in certain ways, and delete it immediately. We're not seeing enough of that being clearly articulated in these laws.

Joel Thayer:

Kate's preaching to the choir here. It's no secret that I helped design the App Store Accountability Act of Texas, or at least was consulted on it. One of the things that we really were concerned about was the privacy concern that you described, which is why there's potentially a right to delete, or, well, basically a requirement to delete all of that information and make sure that the privacy stuff cannot go by the wayside.

So I just want to make sure it's noted that there's another area of agreement that Kate and I actually have, which is that you have to have a privacy-focused approach when reviewing or enacting these statutes. And I, too, am going to be a keen watcher of where the courts ultimately come out. So I appreciate Kate bringing that up.

Cristiano Lima-Strong:

So I'll bat clean-up in the end. Thank you all so much for joining us. It’s been a great discussion and I really appreciate it.


Authors

Cristiano Lima-Strong
Cristiano Lima-Strong is a Senior Editor at Tech Policy Press. Previously, he was a tech policy reporter and co-author of The Washington Post's Tech Brief newsletter, focusing on the intersection of tech, politics, and policy. Prior, he served as a tech policy reporter, breaking news reporter, and s...
