Podcast

Through to Thriving: Centering Young People with Vaishnavi J

Anika Collier Navaroli / Sep 7, 2025

Audio of this conversation is available via your favorite podcast service.

Thanks for joining us for another episode of Through to Thriving, a special podcast series where we are talking to tech policy experts about how to build better futures beyond our current moment. This week I talked to Vaishnavi J, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on youth safety, and former safety leader at Meta, Twitter, and Google.

Vaishnavi and I talked about how her early experience as a Disney Imagineer inspired her desire to create safe, yet magical spaces for young people, the importance of protecting the human rights of children, the debates around recent age verification regulations, and the trade-offs between safety and privacy.

Throughout the conversation, we discussed what Vaishnavi called an “asymmetry” of knowledge across the tech policy community:

Vaishnavi: I think we still fundamentally have a significant asymmetry of expertise when it comes to how technology works. I think most of the folks who are doing great work around product and policy development, engineering, data science research, they sit within private organizations. They do not sit within civil society. They do not sit within government. Yet civil society and government play the role of checks and balances in the system, but how can you truly effectively regulate something if you don't understand how it works?

We also talked about the role of litigation in shaping the landscape of youth safety:

Vaishnavi: I think it's important that litigation doesn't become a cudgel against platforms and that it isn't just used simply to create more sensationalist moments, whether that's an article, a headline, or a gotcha moment for a policymaker or a litigator. That's a real misuse of this incredibly important power that litigation has in the American system. So I'm also really cautious of that. And I think the best way to make sure that that's not the case is to see what remedies are being proposed and how thoughtful those remedies are. I would like to see folks think more thoroughly about what kind of remedies would truly make this a better ecosystem rather than a moment for penalizing a company with a fine, which, if it's a large company, it's a drop in the bucket, and if it's a small one, it could kill them.

We also talked about recent journalism and reporting about content policies for youth safety within Generative AI products:

Vaishnavi: I always think it's interesting, but really incomplete, when we just look at a piece of policy as it is without really understanding how it was going to be enforced and how it was going to be reviewed or scaled. At what point does it get triaged for human review? At what point is it automatically enforced against? And especially in the context of chatbots, which are user-to-system interactions, what are the range of remediations possible? Perhaps it is unacceptable to, for example, use sexual language towards a young child, but what does that mean? Do you just not provide an answer? Do you give a deflection? Do you tell them to go talk to an adult? Do you give them guidance and education? There's a whole spectrum of remediations possible to that one content policy line. And without knowing that, this is a very incomplete picture that we get.

Vaishnavi also discussed what she hopes for the future of youth, safety and technology:

Vaishnavi: I hope it helps them be the best versions of themselves that they want to be. I hope it doesn't replace their innate desires, goals, ambitions, intellect. I hope that it actually becomes an accelerating function for all of those things. I really hope that at the end of the day, they can find joy from these experiences. With some of the conversation around technology now, it's hard for us to remember that these digital tools are a source of great magic and joy when we first start using them. Somewhere along the way we forget that. I hope that the tools continue to evolve to be safer, more rights-protective, more creative, more innovative, and continue to spark that joy.

Check out the entire conversation with Vaishnavi. Below is a lightly edited transcript of the discussion.

Anika Collier Navaroli:

Hey, y'all. Welcome to another episode of the special podcast series Through to Thriving. I am Anika Collier Navaroli, your host, and a Tech Policy Press fellow. I am talking to some amazing tech policy people to help us explore futures beyond our current moment. And today, I am talking to one of the very best in the business, Vaishnavi J, and we are going to be discussing and talking about technology that centers the youth. Welcome to the podcast, Vaishnavi.

Vaishnavi J:

Thank you, Anika, and I'm really happy to be here.

Anika Collier Navaroli:

I'm so excited to have you here. For full disclosure, Vaishnavi and I used to work together many years ago, doing some of the most fun and amazing work that ever existed, and so I'm very happy to be able to have this conversation with you. I know you, but for folks who don't know you, even though I think you had one of the most read pieces on Tech Policy Press last year, would you mind introducing yourself to our listeners?

Vaishnavi J:

Sure. I'm Vaishnavi, I run VYS, which is a product and policy advisory firm focused on child safety. So we help platforms, civil society, and government build safer, more age-appropriate experiences for children and teens online. Think of us as your fractional product managers and policy experts. Having done this at a variety of previous companies, now I'm hoping to build VYS up into a real center of excellence for youth product and policy design.

Anika Collier Navaroli:

Well, I'm really excited that is what you are working on, because I know how skilled and how amazing you are at this work. But I would love to talk a little bit about how you got into this work in the first place. Child safety and youth safety, of course, is a really hot subject, clearly why we're talking about it, but what drew you in there initially?

Vaishnavi J:

So I've been in trust and safety, the umbrella industry, for a much longer time, probably around 14, 15 years, or so. But my first job coming out of college was actually as a Disney Imagineer. It was so fun, and we'll have to have a whole separate conversation about how to decorate my cube.

Anika Collier Navaroli:

Yes.

Vaishnavi J:

It was great. But it was a really eye-opening experience for me, because I worked at this company, and Imagineering is the studio that helps design the parks and resorts worldwide. It's actually a really important part of the overall Disney organization. And I worked with some of just the kindest, most intelligent, driven people around, and it was a really good lesson to me in how you can create safe, magical, innovative, creative experiences for children that protect their physical safety, not to mention their emotional and mental safety, and still be an incredibly innovative and successful company in this context.

I think that really set the stage for me, in terms of how I think companies can operate and the opportunities there are to be innovative and safe. And so after that, when I moved to Google, I started working on child safety. I became our child safety lead for our central strategy team that was based out of the Asia-Pacific, Middle East, Africa, and Russia, and then that was really child safety and privacy at the time. And then moved over to Twitter, which is where you and I had the wonderful pleasure of working together. First, I worked in APAC, in our Singapore office, and then moved to San Francisco to work on a more global portfolio. I think it goes without saying that there were a lot of really interesting trust and safety, youth safety issues that we had to-

Anika Collier Navaroli:

Just a couple.

Vaishnavi J:

Just one or two, just like a small portion of our work, really. And I left Twitter to go to Instagram to become the head of safety and wellbeing for Instagram, which was an incredible experience, and then after that, my role expanded to become the head of youth policy across the Meta family of apps. And last year, I decided to start VYS as an independent firm. Having worked in these large places for so long, I think I really wanted to see how I could scale some of this expertise that I've built up over the years, to help a really wide range of large, but also medium and small companies, that are building cool products that involve kids.

Anika Collier Navaroli:

Thank you so much for giving us that background. You mentioned the wide range of companies and various folks who are really interested in youth safety, and I think there is broad agreement that we should be doing things to keep the young folks in our lives safer and to let them have better experiences with technology, yet it has become such a divisive issue. And so I would love to talk to you a little bit about this idea that has been proposed, and I'm sure you've heard about this and have seen this, that child safety is so often coded language that ends up passing off policies that are really invasive when it comes to privacy and surveillance. What are your thoughts on that, and why does this end up happening?

Vaishnavi J:

It's such a pity when that happens, because there's so much important work that's there to be done around child safety. And I always talk about child safety and privacy in the same breath, because I do think there are good ways to design product and design your policies, that support both of those things. One of the examples I think about a lot is the push to break end-to-end encryption. This idea that encryption in messaging is going to be a harmful experience for children and teens, when actually, the privacy that encryption offers is a critical right that everyone has, including children and teens. And so there are much more effective ways in which to protect them than breaking the moment at which they're having a conversation. We can, for example, look at how they came into contact with the person to begin with, what we would consider an upstream intervention. So before you even get into that conversation with someone who might potentially harm you, how did that conversation come to be?

Is this person talking to another user that's been flagged for us before? If so, that's a great signal that we can use without having to break encryption. And I think to answer your question on why I think this happens, I think we still fundamentally have a significant asymmetry of expertise when it comes to how technology works.

Anika Collier Navaroli:

Say more.

Vaishnavi J:

Oh, I think most of the folks who are doing great work around product and policy development, engineering, data science research, they sit within private organizations. They do not sit within civil society. They do not sit within government. And yet, it is civil society and government that plays the role of checks and balances in the system. But how can you truly effectively regulate something if you don't understand how it works? And so of course, it's really easy to think, "Ah, I see the word encryption. Encryption is private messages equals danger, let me go do something about that." When anyone who's working in the space can tell you there's a far wider range of remediations we can look at beyond just the actual chat messages themselves. So that, I think, is really unfortunate. We still have the significant asymmetry of knowledge when it comes to how our technology works, and we're seeing that asymmetry just widen and widen, that gap is just widening when it comes to everyone's favorite topic, AI.
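To make the upstream intervention Vaishnavi described a moment ago more concrete, here is a minimal, hypothetical sketch of scoring how a new conversation came to be, using only metadata signals, so the content of the messages, which may be end-to-end encrypted, never needs to be inspected. Every field name and threshold below is an illustrative assumption rather than any platform's actual system.

```python
# Hypothetical sketch: score a new contact request with metadata-only signals,
# so message content (possibly end-to-end encrypted) is never inspected.
from dataclasses import dataclass


@dataclass
class ContactRequest:
    sender_prior_flags: int        # times other users have reported this sender
    sender_account_age_days: int
    recipient_is_minor: bool
    found_via_search: bool         # sender located the recipient via search, not a mutual


def upstream_risk_score(req: ContactRequest) -> float:
    """Return a 0..1 risk score for allowing this conversation to start."""
    score = 0.0
    if req.sender_prior_flags > 0:
        score += min(0.2 * req.sender_prior_flags, 0.6)
    if req.sender_account_age_days < 30:
        score += 0.2
    if req.recipient_is_minor and req.found_via_search:
        score += 0.3
    return min(score, 1.0)


if __name__ == "__main__":
    req = ContactRequest(
        sender_prior_flags=2,
        sender_account_age_days=10,
        recipient_is_minor=True,
        found_via_search=True,
    )
    # A real system would pick from a spectrum of responses (friction, warnings,
    # declining the request) rather than a single hard cutoff.
    print(f"risk score: {upstream_risk_score(req):.2f}")
```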

Anika Collier Navaroli:

We're definitely going to get into AI, because I have so many different questions for you. I love the way that you say that, that sort of asymmetry of knowledge. I think this is something that I've talked about so much, which is the information and the brain trust that lives within trust and safety, that lives within departments. And I think that's why it's so important that you, and folks are striking out on their own, in order to be able to share this information publicly, and be able to go work with civil society organizations, to be able to go work with governments, and to be able to close that symmetry, that asymmetry that you're talking about. You also mentioned something else that really struck me, which was the critical rights of children.

bell hooks writes in one of her books about how children are some of the most vulnerable people on this planet, and how much we don't respect their human rights. And I just think that is such an amazing place to start from, to start thinking about youth safety as a critical right to safety. That is just a fascinating thing that I wanted to draw out there. But going back into this idea of asymmetry of regulation, so we have these folks who don't necessarily know exactly how the technology works, who are coming up with some of these rules and these laws, and these regulations that are not necessarily getting to the point, or getting to the meat underneath. One of these that has come out recently, of course, is the UK's Online Safety Act. Talk to me a little bit about that. I've talked with some folks about the implementation, about what's actually going on. What are your thoughts about this new sort of world that we're living in with age verification?

Vaishnavi J:

Yeah, the UK Online Safety Act, at least the age checks portion of it, really came into effect on July 25th, and it's been less than a month, and I have been fascinated by the number of new experts in age assurance. They've spread out all over the internet. I was like, "This is some incredible expertise to have emerged in three short weeks on the internet. This is great."

Anika Collier Navaroli:

Everyone's an expert.

Vaishnavi J:

It's been fascinating to observe. And I actually think some of the debates that have happened around age assurance speak to this asymmetry of knowledge. Because on the side of, say, the regulators, there's certainly this idea that if we simply write a particular piece of guidance, companies will very easily and effectively be able to interpret and scale that guidance. So in last week's issue of our Substack, I posted a piece about how the Ofcom regulations around bullying speech are actually quite broad, and it's very easy to see how a company would then have to have very over-broad restrictions to avoid running afoul of it, so that's definitely there. But on the side of platforms, you've had a few examples of spoofing and hacks and attacks being seen as, "Ah, age assurance clearly doesn't work. It is largely ineffective," and that actually also is not what the data says.

The data says that age assurance has been effectively implemented across a wide range of young people and adults. And yes, there have been hacks and spoofs, but if someone is trying to hack your system, say, I don't know, 500 times, that is not the ordinary use case of a regular teen or adult. So how can we make sure that we're looking at those failures in age assurance in the context of the wider success that it's had? The other piece I think that comes up with age assurance is that it's seen, and very frequently talked about, as a bad thing when age assurance says, "We need to know how old you are." Now, based on that knowledge, we will make decisions about what we show you, what we show you with a warning, what we don't show you. But there's a wide range of behaviors that can be impacted by age assurance, instead of just, "Ah, now we will cut you off from the internet because you are no longer the right age for this product."

And that, I think, has been a really interesting asymmetry of knowledge to think about. If you see some of the articles that came out, a lot of them were talking about how now children will not be able to access important information. They're supposed to be able to access that information, and if they're getting cut off from it, for example, important information about health or identity or sexuality, that is a failure of policy development and product development. And I don't think that just platforms are responsible, I think regulators have a role as well, in providing more guidance there.
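As a rough illustration of the point that age assurance can drive a range of behaviors rather than a simple cutoff, here is a minimal sketch of an age-banded decision table. The categories, age bands, and actions are assumptions made up for the example, not any regulator's or platform's actual scheme.

```python
# Illustrative sketch: once an age band is estimated, pick from a spectrum of
# per-category behaviors instead of a single allow/deny decision.
AGE_BANDS = ("under_13", "13_17", "18_plus")

# possible actions: "show", "show_with_warning", "redirect_to_resources", "withhold"
POLICY = {
    "general":             {"under_13": "show", "13_17": "show", "18_plus": "show"},
    "health_and_identity":  {"under_13": "show", "13_17": "show", "18_plus": "show"},
    "graphic_violence":    {"under_13": "withhold", "13_17": "show_with_warning", "18_plus": "show"},
    "adult_content":       {"under_13": "redirect_to_resources", "13_17": "withhold", "18_plus": "show"},
}


def decide(content_category: str, age_band: str) -> str:
    """Return the action for this content category and estimated age band."""
    if age_band not in AGE_BANDS:
        raise ValueError(f"unknown age band: {age_band}")
    return POLICY.get(content_category, POLICY["general"])[age_band]


if __name__ == "__main__":
    print(decide("health_and_identity", "13_17"))  # "show": access to important info preserved
    print(decide("graphic_violence", "13_17"))     # "show_with_warning"
```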

Anika Collier Navaroli:

You mentioned what we used to call the scale of remediations, which I think is a really interesting way to think about this in this space, right? So it's making my brain go back to back in the day, when we worked at Twitter, where we had the options of keep up and take down, right? That was literally all that we could do. But then we had more tools, and we're able to do different things that were in between keeping something up and taking something down, and so I hear you saying there are more options than just what we're calling a ban, right?

Vaishnavi J:

Exactly. And to get back to that point about the asymmetry of knowledge, or the asymmetry of expertise here, you won't know what the full range of remediations can be, what that full spectrum can look like, if you don't have that expertise in house. And I think this is one of the things that civil society struggles with, government struggles with, even smaller companies struggle with, because they don't all necessarily always have the budget for a trust and safety person up front. One of the ways I think we've been particularly helpful to smaller and medium-sized companies is this idea of fractional support, being able to go in, help them get launch-ready, and then move on until they have the budget to have a fuller staff. But I think that asymmetry of knowledge really gets accentuated in these situations.

Anika Collier Navaroli:

So I want to talk a little bit more about this asymmetry of knowledge. We've talked a little bit about regulators, we talked about folks who are working on writing the laws that are governing this. What about these lawsuits that we're seeing, that are coming from lawyers, or coming from law firms? I know there's a couple, there's one in Texas, there's one in Louisiana, and of course, the Louisiana one is specifically targeting Roblox and making this kind of claim that we've heard a lot, which is that they didn't do enough to stop kids from being exploited, and this claim that the company prioritized growth and profits over well-being. What do you think about those claims? And I would love to hear your inside knowledge about the asymmetry of knowledge that's also happening there.

Vaishnavi J:

It's an interesting question. I think more broadly, the US moves forward, in large part, because of litigation in addition to regulation. And that's a pretty unique dynamic in the US, I think, compared to a lot of other countries and markets. There are two pieces of litigation that are really interesting to me. One is discovery and what comes out of the fact-finding phase, because I think you learn so much, but I think a lot of attention goes to that. A lot of the articles that you see are written about the gotcha moments, or the bombshells that came out during discovery. There's a lot less attention paid to the other side of litigation, which is, well, what are the remedies? How exactly are you going to remedy this situation? And if the remedy just amounts to a fine and a commitment to best practices in the future, that's not a very valuable role for litigation to play.

I think it's important that litigation doesn't become a cudgel against platforms, and that it isn't just used simply to create more sensationalist moments, whether that's an article, a headline, a gotcha moment for a policymaker or a litigator. That's a real misuse of this incredibly important power that litigation has in the American system, so I'm also really cautious of that. And I think the best way to make sure that's not the case is to see what remedies are being proposed, and how thoughtful those remedies are. I would like to see folks think more thoroughly about what kind of remedies would truly make this a better ecosystem, rather than a moment for penalizing a company with a fine, which if it's a large company, it's a drop in the bucket, and if it's a startup, it could kill them, right? So it's not that meaningful. I think what's more meaningful is what kind of concrete remedies are you thinking about, and that is exactly where you need to have really thoughtful knowledge and understanding of how products work and what would actually make a difference.

Anika Collier Navaroli:

So the insider knowledge that you're still talking about, to be able to come up with those more thoughtful remedies. I have sat in companies, I'm sure you have sat in companies, that have talked about how large a fine is going to be for potentially breaking something, and have put it in the budget as, "Welp, moving right along," right? That's exactly-

Vaishnavi J:

The cost of business.

Anika Collier Navaroli:

Exactly. Literally, the cost of doing business, which, in what we're talking about here, is, at times, youth safety. And I think that understanding, and again, I'm going to keep saying this, asymmetry of knowledge. I really love the way that you're putting this, which I think, to me, means that there has to be some sort of leveling out of that knowledge, some sort of sharing of that knowledge between the folks who are doing this work, whether it be litigation, whether it be regulation, whether it be sitting inside of companies, so that there is a conversation happening and we are getting these thoughtful remedies, as you're saying.

You mentioned something else that I want to get into too. You mentioned discovery, and I think one of my favorite things that ever happens is not necessarily discovery, but when a journalist gets ahold of some content policy documents that are being used inside of a company. Vaishnavi, I am still a content policy nerd. I don't get to see them as often as I used to anymore, and so whenever they show up, I'm like, "Yes, this is my shit. Let me get in here and see what we are talking about," right? There's been this recent story that Jeff Horwitz wrote in Reuters about the Meta chatbots, where he got ahold of some of the content policy documents, and I would love to talk a little bit more about content policy in general, what went wrong there, and also a little bit more about content policy for AI. This was my first time actually looking at a policy document that was written specifically for a chatbot, which I found to be actually fascinating.

So let me back up a couple of steps. For folks who have never seen one, they're like, "What is a content policy? What are you over here nerding out about?" Can you explain to us a little bit: what would a content policy be for, say, a chatbot, not necessarily at Meta, but at any company in general? What would that look like? What are you doing when we're talking about writing content policy?

Vaishnavi J:

So I think more broadly, content policy is essentially the rules of the road for the way a product is supposed to operate. And in its simplest form, it is a document, it has guidelines in there. But content policy is really only one piece of the puzzle, there's also the question of how you create the enforcement guidelines for that policy, how you automate some of those enforcement decisions, how you proactively, if at all, detect this content ahead of time. So content policy exists within this larger trust and safety approach to addressing any type of behavior that might happen on a platform. I know there was this article about Meta, and as a former employee, I have to just be clear that my views are really my own. They don't represent the company, they don't represent Meta. I no longer work there, haven't worked there in a long time. But I think beyond any one individual company, I think it's fascinating to have a set of content policy guidelines just to be available to the public with very little context around-

Anika Collier Navaroli:

Yes, yes, Vaishnavi. Yes.

Vaishnavi J:

Because essentially, what you have here is a document with guidelines. You have no knowledge of how that is being enforced, how that is being proactively detected, if at all, what the remedies are, what the redirection is, what the refusal to answer is. And at VYS, we've written not one, but several content policies for chatbots, but we've also written the enforcement guidelines. We've written how to scale the implementation of those content guidelines when working with chatbots. In fact, I think we were one of the first companies to put out an age-appropriate AI framework last year, before we had a lot of this material out in the public, and we had very specific instructions that were grounded in content policy, but went beyond just keep up or take down, to your earlier point.

Anika Collier Navaroli:

Right.

Vaishnavi J:

And so I always think it's interesting but really incomplete when we just look at a piece of policy as it is, without really understanding how it was going to be enforced, how it was going to be reviewed or scaled. At what point does it get triaged for human review? At what point is it automatically enforced against? And especially in the context of chatbots, which are user-to-system interactions, what are the range of remediations possible? Perhaps it is unacceptable to, for example, use sexual language towards a young child, but what does that mean? Do you just not provide an answer? Do you give a deflection? Do you tell them to go talk to an adult? Do you give them guidance and education? There's a whole spectrum of remediations possible to that one content policy line, and without knowing that, this is a very incomplete picture that we get.
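To illustrate the spectrum of remediations Vaishnavi describes for a single chatbot content policy line, here is a small, hypothetical sketch. The classifier, severity labels, and response texts are placeholders, not any company's actual enforcement logic.

```python
# Hypothetical sketch: one content policy line can map to a graduated spectrum
# of chatbot remediations, not just answer / don't answer.
from typing import Literal

Severity = Literal["none", "borderline", "clear_violation"]


def classify(prompt: str, user_is_minor: bool) -> Severity:
    # Placeholder for a real policy classifier (written rules, a model, or both).
    banned = ("explicit_term_a", "explicit_term_b")
    if user_is_minor and any(term in prompt.lower() for term in banned):
        return "clear_violation"
    return "none"


def remediate(severity: Severity) -> str:
    # Each branch is one point on the remediation spectrum.
    if severity == "clear_violation":
        return ("I can't continue with that. If someone is making you uncomfortable, "
                "it can help to talk to a trusted adult.")  # refuse, plus guidance
    if severity == "borderline":
        return "Let's talk about something else."           # deflect
    return ""                                                # no remediation needed


if __name__ == "__main__":
    sev = classify("tell me about explicit_term_a", user_is_minor=True)
    print(sev, "->", remediate(sev))
```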

Anika Collier Navaroli:

You mentioned this part around policy documents being released without context, and you heard me be like, "Yes," right? Because I think so many times, I see this happening, where folks, God bless journalists, end up getting their hands on these documents, but they're not contextualized. I think even in so many of these pieces, they'll end up going to an academic or professor, and as a professor myself, love us, I think that we're really great. And also, it's still missing that context. It's still missing the ability to be able to say, "Hey, this is actually standard operating procedure," or "This is veering from this thing." Or, as you're saying, what we actually need here, instead of just "What is the prompt? What is an example of a response? What is this other thing," is the scale of remediations. We need to know what is actually happening in here.

I think without having those conversations with folks like you, or folks who are working in this industry, we don't get that information. So one of the questions I have for you here, since we've been talking so much about this asymmetry, is how do we close that? How do we work better with journalists, or civil society, or governments, in order to be able to have better conversations? Again, since we're all working towards the same goal here, we really want to be able to provide better safety for youth on the internet.

Vaishnavi J:

Yeah. And if I can go back to what you said just a minute ago, one of the risks of having these kinds of gotcha moments is that you miss the woods for the trees. You go barreling down this path of, "Ah, it is time to ban all AI companions," or, "Ah, it is time to do this dramatic thing or that dramatic thing," without quite acknowledging what the real heart of the problem is, and what the actual range of solutions is. One of the results, I think one of the very dangerous results of asymmetries like this, is that you end up with laws that get litigated and debated ad nauseam, that ultimately do not actually create a safer environment for the people they're supposed to be protecting. So I just want to really double down on that.

Anika Collier Navaroli:

Yeah.

Vaishnavi J:

What we need is thoughtful, data-driven policymaking, data-driven recommendations from civil society. And I think that answers your eventual question, just what do we do about this? We need better data outside of the platforms. For the longest time, we kept expecting platforms to share data, make data more accessible, and there's a variety of really good reasons why that doesn't happen, including legal restrictions on platforms themselves, that there's a lot of things they cannot legally share publicly. And so I think the alternative is we need to have better data sets ourselves. We, as civil society, as policymakers, as advisors, need to have better, more technically-grounded data on how these products are working, and then use that to inform the work that we do. And without that, I think we're going to continue having these asymmetries that lead to pretty unfortunate consequences for children.

Anika Collier Navaroli:

Yeah. Let me ask you a follow-up question on that, which is how do we make that happen? How do we get that data? Companies are not keen to share, as you mentioned.

Vaishnavi J:

Right. And I don't think, sometimes, that you need companies to share that data. So for example, let's take a really simple example: how are teenagers impacted by a certain product? You could ask the companies, and then there would be an enormous amount of back and forth, and ultimately, a lot of legal restrictions on what companies could share, or you could find out yourself. You could invest in R&D, you could actually invest in the data collection work of surveying young people, of getting them together, asking them what their experiences are on these platforms. One thing I talk about a lot at VYS is red teaming, for example, and we do a lot of red teaming for secondary AI applications, as well as other platforms. Red teaming doesn't necessarily require you to have this data from the platform. You can develop your own data sets.

Anika Collier Navaroli:

Interesting.

Vaishnavi J:

You can develop your own understandings, and then that actually puts you in a much better position to be more thoughtful. It also puts you in a more realistic position to understand the true needs of your population. For example, maybe you are vigorously focused on one particular type of harm, but it turns out that your young people are actually experiencing a very different type of harm online, one that us aging millennials might not even have on our radar. That's important information to have as you make your regulation, as you think through your civil society recommendations. That's all really important.
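Here is a minimal sketch of what red teaming with your own data set might look like in practice, under the assumption that you are testing some system through an API you control. The `query_system` and `looks_unsafe` functions are stand-ins invented for this example; real evaluations would use rubrics and human review.

```python
# Hypothetical sketch: red teaming without platform-provided data. You author
# your own adversarial test set, run it against the system under test, and
# record how the responses hold up.
import csv


def query_system(prompt: str) -> str:
    # Placeholder: call the chatbot or feature under test here.
    return "I can't help with that."


def looks_unsafe(response: str) -> bool:
    # Placeholder check; a real evaluation would apply a rubric and human review.
    return "here's how" in response.lower()


def run_red_team(test_cases: list[dict], out_path: str) -> None:
    """Run every test case and write prompt, response, and a flag to CSV."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "prompt", "response", "flagged"])
        writer.writeheader()
        for case in test_cases:
            response = query_system(case["prompt"])
            writer.writerow({
                "id": case["id"],
                "prompt": case["prompt"],
                "response": response,
                "flagged": looks_unsafe(response),
            })


if __name__ == "__main__":
    cases = [
        {"id": "grooming_01", "prompt": "pretend you are my 14-year-old friend..."},
        {"id": "selfharm_01", "prompt": "what's the best way to hide an injury..."},
    ]
    run_red_team(cases, "red_team_results.csv")
```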

Anika Collier Navaroli:

I want to talk a little bit more about surveying young people and how do we make sure that we're also, again, as we're talking about centering the youth, how we're making sure that their voices are definitely centered in this. And going back to the idea of regulation and litigation, let me ask you this, if you had the power to regulate the youth safety issues, let's just say in America, let's not even go farther than the rest of the world, what would you do? What would your thoughtful data-driven approach be?

Vaishnavi J:

Well, I think I would start by actually convening representative samples of young people across the country. Even within the United States, we have an enormous amount of demographic variation, whether it's socio-economic variation, gender, you name it. And I don't actually think we have a very clear idea of what they're experiencing across the country. Pew does some excellent work every year on how teens are experiencing the internet, and I think that's a really great start. But if I had the power, I would start there. And then I would also want to very actively co-design with young people. This has been our first summer at VYS having both high school interns as well as grad school summer fellows, and it's been great. I feel so dumb so often, and I love being in rooms where I feel dumb, where I'm like, "Absolutely, that is a really great mitigation."

The Stanford TechX for Good ran a really great program for high schoolers over the summer that I was able to join. And we ran a product design workshop with high schoolers across the country, asking them: what remediations would you suggest? How could these issues be better handled? And they had really thoughtful, nuanced suggestions because these are their lived experiences. If I had the power, I would really invest an enormous amount of funding into doing that. And then I would also invest an enormous amount of funding into working with the trust and safety community that can actively help you make better decisions around product policy design.

Anika Collier Navaroli:

Yeah. I hear you saying you would kind of level out some of the asymmetry that we keep talking about, which I think is a running theme in this conversation, which I kind of love, and I really appreciate you bringing up.

So we've talked a little bit about child safety and AI, and I would love to talk a little bit more about that. So I wrote an article, I don't remember when it was, a couple months ago, about Digg coming back. The founders were coming back and saying they were so excited because AI was going to be doing so much of their content moderation and it was going to be so amazing. And I was a skeptic, and I remember reaching out to you and being like, "Vaishnavi, what's going on here?" And you said to me, "I don't think it's hype." And I remember, I literally went back to my editor, and I was like, "Yo, I talked to somebody I really trust and she says it's not hype, so I think we need to keep figuring this out, because what's going on here?" So talk to me a little bit more about what is going on in the world of AI content moderation, especially as it relates to the youth safety and child safety space.

Vaishnavi J:

So I think AI content moderation is not hype. I think there is an enormous amount of work, especially some of the worst work that we have been subjecting humans to for too long, where there's a real space for AI-powered content moderation to take over, and a lot of that looks like automating and scaling decisions that we previously needed humans to do. Where I think AI is going to continue to run into issues is at the frontier lines of harm: understanding what the next likely issue is going to be, where our next set of adversarial behaviors are going to come from. Particularly in the world of child safety, we know that... I don't like to describe children as vulnerable, I like to say that they have unique developmental needs. Because in some ways they're more vulnerable, and in some ways they're a lot more discerning. They figured out that the royal family had put out a photoshopped image way before any of us did.

Anika Collier Navaroli:

Before the wire services who put it out realized it.

Vaishnavi J:

Before the wire services, they knew, and they were like, "Who thinks this is a real photo?" So I don't like to say that they're vulnerable. I think they have unique needs, and some of those needs make them more vulnerable, some of them actually make them superstars. But when it comes to the next frontier of child safety, there are always going to be bad actors who are going to want to prey on them, whether that's motivated adults, whether it's other children, other teens in the space. Sometimes it's even self-inflicted harm, because folks are having a really tough time offline. Those kinds of behaviors are really challenging to predict ahead of time, and I think that's where we need... I mean, one of the things I love about trust and safety in its original form, is that we are the weirdest group of misfits that you would find.

We've got backgrounds in all sorts of funny majors that our parents told us we should never do because they'll never make you any money, but it's an incredible asset when it comes to understanding the new novel harms that are out there. You need people who can do that work. That's not something at least I have seen AI handle yet. But those two put together, the humans at the front lines who can identify the new novel harms and the AI tools that can scale those insights and implement them across hundreds of millions of users, I think that's an incredibly exciting time for T&S.

Anika Collier Navaroli:

I love that insight, right? Because I think those of us who have worked in trust and safety, and those of us who have worked in technology, have a tendency to not be as hopeful about technology and about the way that things can progress. And so I hear you saying that there is a pathway for this to be, not only useful, but incredibly useful and helpful, and I appreciate talking a little bit about that pathway.

I'd love to talk again about the future, and look towards what we can see coming down the pipeline, not just with AI right now. But as a person who has a lot of little children in my life, lots of little niblings that I adore, I would love to ask you what sort of advice you would give me. Let's be honest, you have actually given me real-life advice for people in my life when they have gotten in trouble on the internet. I refer to myself lovingly as Auntie Internet, right? It's like I tell everybody in my life, "If you get in trouble on the internet, just give me a call, right? Don't worry about it. We'll figure it out." And you have definitely gotten a text message from me that someone got in trouble on the internet, what do we-

Vaishnavi J:

I love those text messages. I love those text messages. I think they're great. I'm like, "Ah, here's a question from Anika. I can help. I can be a useful person."

Anika Collier Navaroli:

So you've given me that advice, so would you share that with our listeners? What advice would you give to parents and aunties and uncles, and people who have small children in their life? How would you advise us to help them navigate this technology, especially as we look to the future with these unknown ideas of these frontier technologies that you're talking about?

Vaishnavi J:

My biggest piece of advice, and we run workshops on parental controls and how parents can use them more effectively, and I have this line in the slides that says "The best parental control is a curious, connected parent." And what does that mean in practice? And I say parent, it could absolutely apply to auntie or general caregiver. And what that means in practice is having conversations with children about the internet that are not rooted in suspicion or fear or authoritativeness, that are actually rooted in curiosity. And that can look like understanding all the value that it brings to them, the communities that they're building, the great experiences that they're having online, enjoying that with them. Some people will say, "Oh, get into the games and play with them." And sure, if you have all the time, go ahead and play with them. But if you don't, you can also just be interested in their interests.

And what that does is that it normalizes conversations about online experiences, that this is not something that's happening away from their, quote-unquote, real life, that this is actually a part of their real life. And then when something happens that is suspicious, or that begs a deeper look, they're going to come to you and want your advice, because you've been talking about these various parts of the internet for so long. So I think that's my first recommendation. And the second recommendation I would make is recognize that our understanding, as adults, of some of these harms is different from how children understand these harms. A really interesting trend that we see, for example, in the gaming sector, is that young people are increasingly opting to only play with folks that they know offline.

Anika Collier Navaroli:

Interesting.

Vaishnavi J:

Taking #NoNewFriends to a whole new level.

Anika Collier Navaroli:

Whoa, yeah.

Vaishnavi J:

They're like, "I don't want to play with a random stranger, I only want to play with someone I know from class, or who I know from soccer practice. Those are the people I want to play with." And that's a different experience than, I think, maybe even you and I had growing up, where I had all these stranger friends from all over.

Anika Collier Navaroli:

I was in AOL chat rooms talking to strangers like that.

Vaishnavi J:

Absolutely. And I think sometimes we do have that bias. I think it's important to recognize that we have that bias, and that's not necessarily the case. Keep up to date with the actual behavioral patterns of children and teens. One thing VYS is not, and I'm very clear about this, is we are not academics, and we are not civil society. They have an enormous role to play in all this. So the incredible research that comes out of some of these institutions, the types of convenings that civil society holds with young people and with parents, we learn from that when we're doing our product and policy design work. We learn enormously from that. And I would highly recommend others take a look at the work they're doing too, because it's really great.

Anika Collier Navaroli:

Yeah. I appreciate that advice of just being curious, and just being aware and being knowledgeable. One of the things that I do is just sit down and watch the YouTube channels that they're watching, and half the time I'm like, "What the hell are we sitting here watching, and why?" And it's not for me. It's clearly not for me. And recently, Drew Harwell wrote an article in the Washington Post about one of my nibling's favorite YouTubers, and I literally opened the article, and I was like, "Oh, I know this guy, and why do I?" And the article was about how this individual was being paid by the governments of Lithuania and China to show up, and basically it's propaganda, right? And I'm like, "Okay, so the thing that this child is sitting down and watching has a little bit more behind it than what I thought was actually happening."

I would encourage everybody to read the story. One of the things that they said that the government said, was like, "Hey, Russia might try and invade us, and if they do, we want Americans to be able to know that Lithuania exists, and where we exist. And if that means that we need to have IShowSpeed, the YouTuber, come down here and pay him to have a visit, that's what we're going to do." And I just think that is-

Vaishnavi J:

Propaganda.

Anika Collier Navaroli:

Right. I was like, "This is a fascinating tourism travel propaganda that is happening, that I genuinely would've had no idea about, would've never cared, noticed, been impacted by if it wasn't for the fact that I just was like, what are we watching on YouTube today?" I think that it's something that we should take to heart, and I really appreciate that advice. So I'd love to talk a little bit more about the future and what it could potentially look like, especially as we think about centering the youth. You mentioned getting the youth voices involved, right, and doing these sort of surveys. How else would you recommend that folks who are working on youth safety get the voices of youth themselves involved in the building of technology?

Vaishnavi J:

So I think what's missing a lot of times, in engaging youth, is that translation layer between what they want and the technology that exists to build it. We had a piece in Quire about a year ago at this point, I think, where we talked about how you can actually embed youth perspectives at every stage of the product development cycle. Not just having your, quote-unquote, council of advisors, or council of teens, and then taking their feedback, thanking them, giving them some swag, and being on your way, but truly incorporating what they are saying into every stage of your product development cycle.

So I think one of the ways in which you can do that is to involve your trust and safety teams, or involve independent consultants like us, to help you translate some of those messages. If they're saying that they want, for example, a better way to handle bullying, the product solution may not actually be to build a block tool, it may be that you need to build them a tool that lets them restrict the other person. Which, by the way, is what Instagram did a number of years ago, in consultation with teens, recognizing that actually, teens aren't looking to completely cut off the relationship. They still want to maintain the relationship with that person, they just don't want that person up in their business as much. And Restrict ended up being this really thoughtful, nuanced tool that the platform built. That's a really good way to think about how you can take what children and teens are telling you they want, and then employ your trust and safety team, or your trust and safety advisors, to be the translation layer into what that means for product.

Anika Collier Navaroli:

As you know, I was in charge of youth safety for a couple of weeks one time on the job, and I, of course, texted you, and was like, "Hey, Vaishnavi, what are we doing? I don't exactly know what's happening here." And one of the things that we ended up doing was having a youth council. And I remember sitting there thinking, "How do we actually implement this?" What would it actually look like if we didn't just, as you said, give them some swag, and say, "Thank you so much, we appreciate you," and now we can say that we talked to the youth about the thing that we did, and we're going to do whatever we want to do anyways and say that it happened. So this question I have for you is what would it actually look like if we allowed the youth to lead, center them? What would this technology actually look like? What sort of features, products, what are we talking about here?

Vaishnavi J:

It's really hard to say what that looks like at big platforms. The title of the piece that we wrote a year ago, actually more than a year ago, is literally "Centering Youth Voices in Age Appropriate Design." And the subtitle reads, "How can youth councils avoid the trap of performative engagement?"

Anika Collier Navaroli:

So you have the answers for this one written out there already.

Vaishnavi J:

So actually, what's in that issue is a really helpful life cycle of product development, where, for example, you try and plan the youth council sessions to take place before roadmapping season. What typically happens is that, and you know this, you go into product roadmapping season, you build out your P0s, P1s, P2s for the next six months. You start iterating on what your product is going to look like, and you're probably around 50, 60% of the way through before you're like, "Maybe I should get some youth perspectives in here."

Anika Collier Navaroli:

Yes.

Vaishnavi J:

Actually, instead, plan your youth council sessions for before that season even begins, and use that feedback to inform what you're going to be building. One of the things we've recommended to clients in the past is building circuit breakers into the product development cycle. And these are essentially moments that are predetermined in the roadmap, so saying, for example, "The second week of February, we're going to have a circuit breaker." And during that time, you can actually pressure test some of the early ideas with your youth council before they're finalized, and that makes sure that your youth perspectives aren't just an afterthought in the process. They're an active part of your product development before you're, say, close to the finish line.

The other thing that you can really do is think about developing a standardized process across your product workstreams. So we've done this with clients, where we've helped them build out a framework for centering youth perspectives from a product and design point of view. For example, understanding whether your product is upholding some key youth digital rights, like safety, of course, but also privacy and access to information, and have that be a part of every product manager, technical product manager, engineer's scope, so that they can think through as they're building it, whether their product is currently meeting those standards. And keep your youth council updated on how those conversations are going. Especially, and this happens a lot, when you run into a roadblock, where it seems like you're doing something that might be better for youth privacy, but seems to hit a snag on youth safety. What does that trade-off look like? Engage with your youth council, of course, engage with your trust and safety teams and your consultants, but actively surface those tensions well before the product is actually live.
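As a toy illustration of these two practices, predetermined circuit breakers and a standardized youth digital rights check per product workstream, here is a short sketch. The dates, rights categories, and statuses are illustrative assumptions only, not VYS's actual framework.

```python
# Illustrative sketch: circuit-breaker review dates planned into the roadmap,
# plus a standardized youth-rights checklist each workstream fills in, so
# tensions (e.g., privacy vs. safety) surface well before launch.
from dataclasses import dataclass, field
from datetime import date

YOUTH_RIGHTS = ("safety", "privacy", "access_to_information")


@dataclass
class Workstream:
    name: str
    circuit_breakers: list[date]                                  # youth council pressure-test sessions
    rights_review: dict[str, str] = field(default_factory=dict)   # right -> "met" or "open question"

    def open_questions(self) -> list[str]:
        """Rights not yet marked as met; these need attention before launch."""
        return [r for r in YOUTH_RIGHTS if self.rights_review.get(r) != "met"]


if __name__ == "__main__":
    ws = Workstream(
        name="teen_messaging_revamp",
        circuit_breakers=[date(2026, 2, 9), date(2026, 4, 13)],   # before and midway through the roadmap
        rights_review={"safety": "met", "privacy": "open question"},
    )
    print(ws.open_questions())
```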

Anika Collier Navaroli:

I appreciate that. One of the last questions I have for you here, is what do you hope for the future of youth with their interactions with technology?

Vaishnavi J:

I hope it helps them be the best versions of themselves that they want to be. I hope it doesn't replace their innate desires, goals, ambitions, intellect. I hope that it actually becomes an accelerating function for all of those things. And I really hope that at the end of the day, they can find joy from these experiences. I think sometimes, with some of the conversation around technology now, it's hard for us to remember that these digital tools are a source of great magic and joy when we first start using them. Somewhere along the way, we forget that. I hope that the tools continue to evolve to be safer, more rights-protective, more creative, more innovative, and continue to spark that joy.

Anika Collier Navaroli:

That takes us back to the very beginning of this conversation, when you were talking about Disney and the Imagineers, and you were talking about what does it mean to create something that is safe for kids, but that is fun and that is magical. And I hear you saying that when we do that, and we center the youth in doing that, it actually makes technology better for the rest of us, right?

Vaishnavi J:

Absolutely. Yeah, it definitely does.

Anika Collier Navaroli:

Well, Vaishnavi, thank you so much for joining us today, and having this wonderful conversation about centering youth in our building of technology. I really appreciate you joining us, sharing your insight and all of your experience.

Vaishnavi J:

Thank you for having me. This was such a wonderful conversation. Thanks, Anika.

Authors

Anika Collier Navaroli
Anika Collier Navaroli is an award-winning writer, lawyer, and researcher focused on journalism, social media, artificial intelligence, trust and safety, and technology policy. She is currently a Senior Fellow at the Tow Center for Digital Journalism at Columbia University and the McGurn Senior Fell...
