Evaluating Instagram's Promises to Protect Teens
Justin Hendrix / Oct 19, 2025

Audio of this conversation is available via your favorite podcast service.
This week, Instagram made two announcements focused on teen safety.
On Tuesday, the company said it’s “revamping Teen Accounts to be guided by PG-13 movie ratings,” meaning that, by default, teens should presumably now see content comparable to what they’d encounter in a PG-13 film. Instagram added that “parents who prefer extra controls can also choose a new, stricter setting,” and that the company will offer “new ways to share feedback, including the ability to report content they think teens shouldn’t see.”
Then, on Friday, Instagram announced plans to roll out new safety features for teenagers using its AI chatbots. The company said the tools, expected early next year, will give parents additional options to manage how their teens interact with Instagram’s AI characters.
Skepticism remains high, given Instagram’s track record on teen safety. Researchers continue to question whether the company has followed through on earlier commitments.
To explore the platform’s past shortcomings—and the questions lawmakers and regulators should be asking—I spoke with two of the authors of a new report that offers a comprehensive assessment of Instagram’s record on protecting teens:
- Laura Edelson, an assistant professor of computer science at Northeastern University and co-director of Cybersecurity for Democracy, and
- Arturo Béjar, the former director of ‘Protect and Care’ at Facebook who has since become a whistleblower and safety advocate.
Edelson and Béjar are two authors of “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors.” The report is based on a comprehensive review of teen accounts and safety tools, and includes a range of recommendations to the company and to regulators.
What follows is a lightly edited transcript of the discussion.
Laura Edelson:
I'm Laura Edelson. I'm an assistant professor of computer science at Northeastern University and I co-direct Cybersecurity for Democracy.
Arturo Béjar:
I'm Arturo Béjar, and I am the former director of Protect and Care at Facebook and have become a whistleblower and safety advocate.
Justin Hendrix:
And Laura, can you tell us a little bit about Cybersecurity for Democracy? What it gets up to? What its research agenda is these days?
Laura Edelson:
So we are a nonpartisan computer science research lab and we study a range of things that affect people on the internet. Our two major areas of research are understanding censorship technology and figuring out ways to help users circumvent that, and also understanding harm to users on large social media platforms.
Justin Hendrix:
And Arturo, I might invite you to just say a word about your trajectory since leaving Meta and what you've got up to with your continuing research and advocacy.
Arturo Béjar:
At some point, I started looking into how these tools actually work, as well as thinking about what needs to be in place to ensure that there's harm reduction, actual measurable harm reduction. And so part of it is the tools, and the other part of it is assessing harm as experienced by the people who use these products, in particular teens.
Justin Hendrix:
Of course, that is the focus also of this report that we're going to talk about today, which is called "Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors." This report is brought to you by Arturo, of course, and Laura and their organizations, but also you've got a variety of other partners. Can you just talk a little bit about the genesis of this? How did you bring this group together? Who was involved in the research and drafting?
Arturo Béjar:
Earlier this year, I decided to look into how these tools actually work, so that I could speak to them accurately when I'm talking to regulators or in different contexts. And when I started testing them, I was surprised to find meaningful issues with most of them very quickly. But I realized that in order to be able to speak about this to the world, as a report does, you needed a group of people to come together and look at the tools and then write up the findings so that they were broadly and well interpreted. And so I reached out to Laura, in the knowledge that she had a lot of experience with security.
And then I gathered Fairplay, which is one of the advocacy organizations in the US. And then the Molly Rose Foundation, which is one of the key advocacy organizations in the UK. Because I believe that the only way that this is going to get better is through some form of regulation. And so I wanted to help move that conversation forward.
Justin Hendrix:
You point out that in September 2024, Meta announced it was introducing Instagram Teen Accounts, and that all teenagers would be automatically enrolled in the program. And you set out to review all of the different teen account safety tools listed on Meta's website, both the tools that preceded the announcement of Teen Accounts and others that were introduced when Teen Accounts launched. Tell us about the methodology for this. How did you go about assessing the safety features, enumerating them, and then rating them?
Laura Edelson:
We were hoping to develop a methodology that would not just serve this report by allowing us to understand which tools were broken and which were not. We wanted a methodology that would allow us to move on from that stage and start to think a little bit more holistically about environments of tools, one, and also to think about how to make better tools in the long term. And so we came up with these different categories that we thought would be useful for understanding the effect that tools have on users and on the larger security environment on a social media platform.
So the first is just the user target. This first dimension slices up the kinds of safety risks that can exist to ask: is this managing a risk that is specific to one user, or is this about an interpersonal risk, a risk between users? So this is just getting at a category of risk that exists on a platform. The next dimension is trying to answer the question of, "Okay. Well, given any kind of risk that might exist, is this tool trying to prevent harm from happening, or is it trying to mitigate or reduce harm after it's happened?" And the reason this is important is that, ideally, we would have a mix of tools: some that try to prevent bad things from even happening, and others that, acknowledging that no tool is going to be perfect, try to limit harm once it's occurred.
Next, we have the safety scope of a particular tool. So again, this is an area where ideally you'd want a mix of tools. So you might design some tools just to be specific to that one user who is turning on a tool. And this can be something like letting users turn on settings that will stop other users from messaging them in specific circumstances. But you could also design community tools that might look at overall patterns in the network, that might say, "A lot of users have reported an interaction with this one particular other user. Maybe we should look at that user's pattern of behavior to try to understand if they pose a risk to the larger community," for example.
Next, we wanted to categorize the actual risk that the tool is trying to mitigate. There are some existing frameworks for risk; they're often called the four Cs. These break down as follows. Is this a risk about content? Maybe the content is spammy, or it contains graphic violence. Is this a risk about contact between two users? Is this a risk that is specific to the conduct of one particular user, like bullying, for example? I know that's also a harmful interaction, but you could also have built a tool that detects that direct conduct. And then there's commercial exploitation: things like any kind of manipulative ad, or a scam, things like that.
And then we defined two more of these risk categories that we saw tools were aimed at. One was compulsivity. Clearly, some tools see user overuse as a risk they're trying to mitigate or manage, and so tools like take-a-break reminders and go-to-bed reminders are aimed at this kind of risk. And then the last one is circulation. This one gets a little bit meta, but there are certain kinds of content that are absolutely fine when they're shared in the original context the user intends. So, for example, people who are on a gymnastics squad or a cheerleading squad might share images or videos inside a group that was specific to that interest, and it would be fine in that context. But they might really not be fine with that content circulating in another community without their knowledge. And so that's the last risk we defined: circulation.
The last dimension we categorize tools along is their implementation style. Some tools are always on. They're enabled automatically, on by default, and a user doesn't have to take any additional steps. Some tools are prompted, where the user is told, "Hey, you might want to turn on this tool," but the user still has to opt into it. Other tools are available, and a user can go and turn them on if they want to, but they're not on by default. And then finally, there's a last category of tool where not only does the user have to turn it on, but they have to do some other configuration step. As an example of this last one, there are some tools that allow a user to block specific keywords from their comments, but the user has to define all the words that they want to block.
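To make the taxonomy Edelson describes concrete, here is a minimal sketch of how those dimensions might be represented in code. The class names, labels, and the example tool are illustrative assumptions, not the report's own schema or tool inventory:

```python
from dataclasses import dataclass
from enum import Enum

# All names below are illustrative assumptions, not the report's own labels.

class UserTarget(Enum):
    INDIVIDUAL = "individual"        # a risk specific to one user
    INTERPERSONAL = "interpersonal"  # a risk between users

class Intervention(Enum):
    PREVENT = "prevent"    # tries to stop harm before it happens
    MITIGATE = "mitigate"  # reduces harm after it has occurred

class SafetyScope(Enum):
    USER = "user"            # acts only for the user who enables it
    COMMUNITY = "community"  # looks at network-wide patterns, e.g. many reports about one account

class RiskCategory(Enum):
    CONTENT = "content"            # e.g., spam, graphic violence
    CONTACT = "contact"            # unwanted contact between users
    CONDUCT = "conduct"            # e.g., bullying by a particular user
    COMMERCIAL = "commercial"      # scams, manipulative ads
    COMPULSIVITY = "compulsivity"  # overuse (take-a-break style tools)
    CIRCULATION = "circulation"    # content leaving its intended context

class ImplementationStyle(Enum):
    DEFAULT_ON = "default_on"            # always on, no user action needed
    PROMPTED = "prompted"                # suggested to the user, still opt-in
    AVAILABLE = "available"              # opt-in, user must find and enable it
    USER_CONFIGURED = "user_configured"  # opt-in plus extra setup, e.g. keyword lists

@dataclass
class SafetyTool:
    name: str
    target: UserTarget
    intervention: Intervention
    scope: SafetyScope
    risk: RiskCategory
    style: ImplementationStyle

# Example: classifying a user-defined keyword comment filter like the one described above.
keyword_filter = SafetyTool(
    name="User-defined keyword comment filter",
    target=UserTarget.INTERPERSONAL,
    intervention=Intervention.PREVENT,
    scope=SafetyScope.USER,
    risk=RiskCategory.CONDUCT,
    style=ImplementationStyle.USER_CONFIGURED,
)
```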
Arturo Béjar:
One of the key insights that led to the report was applying red team testing to safety tools. These tools are going to be given to teenagers, so it's very reasonable to expect that you could give them to somebody to figure out whether they're effective, whether they can be either accidentally or trivially circumvented, and how effective they might be in the hands of young people. And it is that application of safety scenarios, or red team testing, to safety tools that we hadn't seen done before anywhere else and that we would love for other people to do. Because we expect safety testing of cars, of food, of toys, of almost everything in society, testing done independently, from a source that you can rely on, in order to know that the safety mechanisms are effective.
And in this case, what we found is a car with like 50 airbags, and the ones around the driver don't fire when the car hits the wall. And so it's very important to have this kind of testing.
Justin Hendrix:
Well, just to maybe draw out that metaphor a little bit, I mean you have this stoplight rating for these various safety features and tools. And I guess the headline from this report is that you rate 64% of them red. You say that that's because they were either no longer available or ineffective. 19% of the safety tools, that's nine tools, reduce harm, but with limitations. Only 17% of the safety features, that's eight tools, worked as advertised with no limitations. So I don't know. Explain just a little bit about this to me. I mean, I'm surprised to hear there are so many tools that essentially have been sunsetted, no longer available, or ineffective. Within that category of what's rated red, how does it break out? What's simply no longer there? What did you rate as completely ineffective?
Arturo Béjar:
Yeah. So I'm going to give some examples. And we can talk about pretty much any of the risk categories that we covered, and I can give examples on those. But a simple one, which covered many tools, is inappropriate contact and conduct. You have these things that say, "We know when somebody's trying to send you an aggressive message, so we're either going to hide it, or we're going to block it, or we're going to tell the person to be nicer." There are, I think, eight or nine press releases about features that have to do with that. Now, the test comment that I began with was, "You are a whore. Kill yourself now."
And you'd really think that comment... My intent in playing that was: surely, that is the worst thing I can think of, the thing that would make all of the systems trigger. And none of the systems triggered. The comment went through. The person received it. There were no warnings. It didn't get hidden, and nothing else like that happened. And then the other thing that we found when looking at those tools is that they tend to be based on blacklists. So if you just paraphrase a little bit, it'll just get through. Again, no warning. And so it's really not effective, because it can be trivially circumvented.
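The weakness Béjar describes, exact-phrase blocklists that miss simple paraphrases, can be illustrated with a minimal, hypothetical sketch. The phrases and matching logic below are assumptions for illustration only, not any platform's actual system:

```python
# A toy blocklist filter: only exact phrase matches are caught.
BLOCKLIST = {"kill yourself", "you are worthless"}

def is_blocked(comment: str) -> bool:
    """Return True if the comment contains an exact blocklisted phrase."""
    text = comment.lower()
    return any(phrase in text for phrase in BLOCKLIST)

print(is_blocked("Kill yourself now"))       # True: exact phrase matches
print(is_blocked("Why are you even alive"))  # False: same intent, no phrase match
print(is_blocked("k!ll yourself"))           # False: trivial obfuscation slips through
```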
Another good example has to do with their search safety features. They've promised that you won't be able to find self-harm, eating disorder, or these other kinds of content. And we went into search and we just started typing, "I want to be..." And then autocomplete would give you examples of things to search for, like "I want to be thin with skinny thighs." I mean, you name it, examples that came straight from their own press releases. And their own product design for search recommendations jumped over the safety setting and landed you in that content. Then that got recommended to you on other surfaces once you had been exposed to it.
So if you ended up by accident in eating disorder content, that eating disorder content would start getting recommended in Reels, the home feed, and search surfaces. You would get accounts that are promoting it recommended to you. So that's an example of tools that were ineffective. The last one, which is totally missing, was Take a Break, which they spent many years talking about, doing innovations to encourage kids to step away from the application. They had creators do stuff. The setting was gone. Also Daisy, which was supposed to reduce the visibility of views, something their own internal studies acknowledged as a mechanism that increased compulsivity for people and made them feel worse about their content: they changed that so that you can only hide likes or shares, and they didn't tell anybody. So those are examples of the things that are red.
Justin Hendrix:
I want to ask a question about a tension you raise in the report between design and content, because some listeners might be hearing what you're saying and thinking to themselves, "This is a content moderation issue. This is a failure to appropriately recognize harmful content and remove it." But you say the problem is more in design, in product design. Why is that distinction important here?
Laura Edelson:
The reason this distinction is so important is that Meta, Instagram, have made representations to their users specifically about the safety tools that they've built. And this is why it's really important to understand, do those tools work? Have they accurately represented how their product functions to their users and to the parents of their users? And I think this is where I am really excited about this approach that we took with Arturo and that I'm really excited about for the future, to take these security testing methodologies like red teaming that we know work from other areas of security and apply them to these questions of user safety.
So I think it is just a very reasonable question for a parent to say, "Hey, Meta has represented that this tool on Instagram works in this way. Does it actually?" I think that is the question that most parents are asking. They aren't asking a question about, "Is my kid going to see graphic violence? Is my kid going to be hounded by bullies? Is my kid going to be extorted?" That's not the question they're asking. The question they're asking is, "Meta has said that they're going to follow some guideline or they've implemented this tool. Is it there? And does it work?" That is, I think, the thing that actually matters.
Arturo Béjar:
For every issue that we raised, there are no changes to content moderation needed. Every recommendation has nothing to do with content moderation. It's about: does this tool do what they say it does? And does it protect against the risks that they acknowledge parents are worried about? Which is: is my kid going to experience inappropriate contact or conduct? There's a set of tools for that. Is their time well spent? Is there going to be compulsive use? There are tools for that. So for each of these categories, we looked at it from that perspective. And what we found were things that were product design issues.
Go to Google today and spend an hour trying to search for suicide stuff. Just be really creative with your search queries. As Jeff Allen from the Integrity Institute showed in a wonderful report a few months ago, you cannot find it, because they have implemented search protections for suicide content, which you can totally do. That's not the case with the way Instagram is implemented. You can accidentally end up in a fire hose of that kind of content, which, in a moment of vulnerability, can lead to tragic consequences.
Justin Hendrix:
Let's talk a little bit more about this issue of design failure. Laura, I know you've been thinking about that in terms of accountability, in terms of regulation, in terms of what role government might play in potentially exerting pressure on companies to make sure that the safety features they promise are in fact delivering on that promise. What are the implications of this from a regulatory perspective, this focus on design versus thinking about the content harms?
Laura Edelson:
Look, I think this is a pretty classic consumer protection issue. I think that Meta has entire advertising campaigns aimed at parents. And my goodness, do they seem to roll out a new one every 18 months, telling parents that this is the time that they finally made the product safe for their kids. And I'm a parent. I hope for the day that Meta makes their product safe for kids. But my goodness, how many times do parents have to be Charlie Brown while Lucy holds the football? And it feels at a certain point like we in the research community have a responsibility to verify the claims that platforms make about their products, especially when they have misrepresented how their products work, critical safety aspects of their products, to the public so many times.
And again, as a parent, boy do I hope that this is the time that they have gotten it right. I really do. Because the reality is that social media has become such an integral part of modern life. We need to figure out how to make these products safer for all users, obviously, but particularly teens. And that is the world I am trying to build to and that is the world I think... I think that so many aspects of our regulatory system are trying to figure out how to think about this, but are probably waiting for us in the research community to keep doing more work to demonstrate what safer social media look like. And I'm trying to do that work.
I think that, certainly, it's a relevant question in the courts. Because again, platforms have made material misrepresentations to consumers. And I think this is going to play out in a lot of different venues.
Justin Hendrix:
One of those announcements that Meta has made around teen safety happened this week. We saw news that Instagram said it would overhaul its approach to Teen Accounts, various changes it was going to make, including, effectively, age gating, restrictions on search, and presumably additional protections around what types of accounts can communicate with one another. Did you look at that new announcement this week and see any of your recommendations represented in it? How much confidence, Arturo, do you have that these new changes to Instagram Teen Accounts will address some of your concerns?
Arturo Béjar:
There are a couple of questions there. The first one is, I'm not sure what those changes mean in terms of the risks that Meta themselves have highlighted. Having spent a meaningful amount of time looking at eating disorder content, for example, I mean, that all seems to be PG-13 to me. It's about people talking about their bodies, unreasonable representations of things, recipes and things like that. And so what I wish Meta had announced was, "We acknowledge that there are issues here, we're doubling down, and we're going to get an independent measure of how effective we are at reducing these harms, the harms that we already know about."
Now, the other thing that I've noticed is that they tend to do these announcements when things are pretty bad in the press cycle. I know because I was on the inside when this happened a number of times, and I was part of the team that would go look at something for a couple of weeks. And what happens is you make the announcement and then you move on to something else. And I think that's how you end up with this graveyard of safety tools that don't seem to be maintained as they ought to be, given the role that they play. And so I really hope that they do implement it. Because, like Laura, I believe in social media; I've dedicated most of my life to it. And if I'm doing this, it's because my goal is to get these products to be safe for teenagers, so that as a parent you can have peace of mind when you hand over the phone to your kid and say, "Yeah, you can sign up for Instagram." I said that with my daughter.
And then the other area is that they talk about age verification and age gating. You know, Mark Zuckerberg and Adam Mosseri have testified to Congress saying, "We do not allow under-13-year-olds." And I don't think they know what the word allow means. Because every time I've done this testing over the last two years, I've found significant cohorts of kids talking about how old they are, in a way that would be very easy to detect and then have a proportionate intervention to make sure that they are indeed of the right age to be in the product. And so I hope they do a good job of that. I really do. But we really need to see evidence of that.
Laura Edelson:
The other thing that I would add, having looked over what Meta has said they're going to do: a lot of the things that they're proposing are about content, thing one. And thing two, they don't say anything about implementation or efficacy. So they presuppose that the changes they're going to make are going to work, and they don't tell users how they can have any confidence in that. And really, frankly, again, as a parent, I would much prefer Meta give me some kind of guarantee that the tools it has now are going to get better, and maybe work a little bit better, before they start rolling out new tools or think about new areas of content.
It's like a carmaker whose cars repeatedly have airbags that don't deploy, and instead of coming out with a car where the airbags work, they say, "Hey, well, this time we've added a new crumple zone." It's like, "Well, sure. But what about all the other safety problems you don't appear to have... You're not telling me that you've fixed those. And if all your other safety systems appear to not work well, I don't necessarily care that you've introduced a new one."
Justin Hendrix:
It seems like to me, it's almost like I can say I don't allow teenagers to drink in my house. And that might be a fine thing to say. But if the teens are in the backyard doing keg stands, then I perhaps haven't really put down that rule in a sufficient way. Is that kind of what you're saying, Arturo?
Arturo Béjar:
Yeah. I mean, this is like: you're saying this and it's getting recorded, and you're telling every parent in the neighborhood, "It's absolutely fine for your kids to come over to my house, because I promise you that they won't be able to get alcohol," while you're looking out the window, into the yard, and seeing them doing keg stands. Maybe somebody's throwing up. And instead of going downstairs and addressing the issue, you close the blinds. Because that's the other thing that's really happening here: as far as I can tell, there's really no meaningful transparency about how effective these tools are.
And the question I wish every reporter in the world would ask them whenever they do a press release like this recent one is: "By what measure? By what measure does this reduce what kind of harm?" This is a company that does everything with metrics. There's nothing they do that's not measured, down to the teen, in terms of user behavior, impact, response, how they relate to it, except when it comes to safety. When it comes to safety, it's about the number of claims that they make, which, as far as we can tell, are not solid claims, rather than the number of kids who don't experience harm. So I think that parents should look at it really skeptically until you know that there's no drinking happening in the backyard.
Justin Hendrix:
I want to look at one particular finding. Because, like Laura said, I think most parents know, "There are bad things on the internet. If my child uses social media, uses the internet, they could potentially encounter content that could be harmful to them. They could search for things that they ought not search for." We know those things on some level. But I think in many people's minds, or at least in my mind, the worst-case scenario as a parent is someone trying to communicate with my teen or my child. You address messaging restrictions in particular as part of your key findings. You talk about the fact that Meta has made some progress in this area. But I don't know, where are we at right now with regard to messaging restrictions? How effective are those safety measures, and what more needs to be done?
Arturo Béjar:
Yeah. So on the plus side, there are restrictions on adults being able to contact minors. When we first tested it, there were some issues there, but they've gone and fixed that, which is very good. But on the flip side, there's their own recommendation engine. One of my test accounts for a teenager gets basically nonstop recommendations from men in other parts of the world. This is for a 14-year-old girl. And once you hit follow on those, you open up the door for messaging between you. And again, it's facilitated by the product. The other thing that we noticed is what happens in the comments for any teen who sets up any kind of public account, in terms of strangers coming in and making inappropriate advances. So some pretty grim stuff is all happening there.
And so, as a parent, I think that you don't have good visibility about who your kid is talking to and what the nature of those conversations is. And then the other gap, which is one that I raised to Mark Zuckerberg, gosh, five or six years ago now, and it's still the case, is that if your kid experiences something uncomfortable, they don't have a way to let Meta know that that's happened. And if you don't have that button that says, "I'm experiencing inappropriate conduct," one that's really easy to use, then how are you going to measure and reduce the harm?
And so I think it's basically, there are some protections, but the kids are exposed in terms of talking to strangers. And then if something does happen, which they might not be comfortable talking to their parents about, they have no recourse because of the way the product is designed.
Justin Hendrix:
You also address sensitive content. That's another area of concern. This one's one where I would expect there to be perhaps more controls, more controls that the parent might even want to engage with. Where do they need to go on sensitive content? And to what extent is this something that parents need to be aware of in terms of how to operate or use these particular settings?
Arturo Béjar:
So from a parent's perspective, I think what you need to know is the importance of talking to your kid about what this sensitive content really looks like. Because most people don't understand. The kind of content that has to do with, for example, eating disorders is not the extreme examples we might think about. It's this constant stream of content: being skinny, here are some ways to lose weight, here's what I'm eating today, I'm feeling self-conscious about my body. And what happens there is the volume of that content reaching somebody who's in a moment of vulnerability.
Let's say you've been through a breakup. What Meta should do, which they do very effectively with spam (and I know how these systems work; you can create systems that don't recommend certain classes of content), is make sure that the classes of content they say they don't recommend actually aren't recommended, or are at least hard to find, without it being a moderation issue. Because if people are able to post something under the terms of service, they should be able to post it. It just doesn't mean you should recommend it to a 13-year-old.
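A minimal sketch of the design distinction Béjar draws here, keeping content postable while excluding it from teen recommendation surfaces, might look like the following. The labels, function name, and logic are assumptions for illustration only, not Meta's actual system:

```python
# Hypothetical classifier labels a platform might attach to posts.
NON_RECOMMENDABLE_FOR_TEENS = {"eating_disorder_adjacent", "self_harm_adjacent"}

def eligible_for_recommendation(post_labels: set[str], viewer_is_teen: bool) -> bool:
    """Return True if a post may be surfaced in recommendation feeds for this viewer."""
    if viewer_is_teen and post_labels & NON_RECOMMENDABLE_FOR_TEENS:
        return False  # the post stays up, but it is not amplified to teen accounts
    return True

print(eligible_for_recommendation({"eating_disorder_adjacent"}, viewer_is_teen=True))   # False
print(eligible_for_recommendation({"eating_disorder_adjacent"}, viewer_is_teen=False))  # True
```

The point of the sketch is that this is a recommendation decision, not a removal decision: nothing is taken down, it simply is not pushed to a 13-year-old.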
But going back to the parents, what parents need to know is: talk to your kids and say, "Look, you're going to run into really sketchy stuff, and you have to trust your own feelings about whether something's a little wrong. You have to notice if you start falling into a rabbit hole where you're feeling pretty sad and then you find content that makes you feel sadder. And the most important thing to watch out for is if you're feeling alone." Because by the design of these products, and by their nature, when something distressing happens, you're going to feel alone. And it's when that alone feeling grows over a long period of time that it can really lead to the worst things.
And so your kid should know: "The way these things are, you're going to run into something. You might feel bad about your body, you might feel alone. It's never the case that you're alone. Talk to somebody you trust about it. Talk to me about it. It'll always be safe to talk about it." And those conversations are really necessary right now, because the product as designed is not safe enough.
Laura Edelson:
I am really glad that Arturo has such a good answer to that question because I hate answering it. It's a question we get asked a lot. And the reason I am so frustrated with this question of, "Well, what should parents do?" is I still think it is just like asking people, "Can you give me some advice about what you should do in a plane crash?" It shouldn't be your responsibility to figure out what to do. It should be people who make planes. It should be their job to figure out how to make safer planes, how to make safer systems. And I think Arturo has really good advice for how parents can navigate what is currently a very difficult situation.
But the reality is that it's just not inside the scope of what is possible for parents to protect their kids in the current environment. And this is why companies need to start treating these risks to users the same way that they treat other kinds of security risks. As Arturo knows and as he has talked about, platforms take things like ad fraud very seriously. They measure those outcomes carefully. They have community-focused security systems that take a holistic view of measuring click fraud and ad fraud and stamping it out, because that kind of security threat is a direct threat to their bottom line.
And until they start treating user safety risks as the same kind of security threat... Which it is. These are threats that are community threats. They're threats to the entire population of... There's certain content that poses a community threat to all of the teen girls in the United States on the platform. That's a community threat. Obviously it's more severe based on certain characteristics that some portion of that population has, but there's some content that poses community threats. And frankly, Meta has an obligation to their users, maybe not a legal obligation, but certainly a moral obligation, to use the power that they have to address those security threats from a community perspective.
Justin Hendrix:
I want to touch on another topic that's big in the news these days around social media. Of course, we've seen various laws implemented across the United States and elsewhere in the world around age verification and age assurance. You address this directly. In a way, I think you don't necessarily wade into some of the larger debates over this question. You look at it very much, as you say, through this lens of what Meta has said it's going to do versus what it has done. But can you just give us a little sense of where we stand right now in terms of what Meta has claimed it will do with regard to age gating or age verification, and where we're at? And I assume this could change based on this week's announcement on some level.
Arturo Béjar:
When we were doing the testing and we found something, we recorded the findings. And part of it is really understanding how harm plays out in the product, which is the ground truth. And so screen recordings and screenshots capture how the product is behaving that day. Some of the things that we found were, for example, videos where this girl must be seven or eight years old, and she started doing dancing trends. And at some point, she saw a video by another girl asking to be rated. The audio of the video says, "Put a red heart if you think I'm cute, put a yellow heart if you think I'm fine, and put a blue heart if you think I'm ugly."
That other video by this 10-year-old girl, who's older, got one million views and 50,000 comments, most of them calling her ugly. One of them being, "Hey, I'm 62, when do you have time?" surrounded by red hearts. And so that's a good example of product design amplifying. Because, I don't know, how many views do you think that video got? Oh, I can't even imagine. One million. Right? And this is an example of inappropriate amplification, that circulation risk that Laura was talking about. So this other 8-year-old girl copied that video. She lifts her shirt up a little bit to show her belly. And where every other video she made has a thousand views, that video has 250,000 views, including comments from adults.
And so when you think about this kind of harm playing out, it's not about CSAM, which is incredibly important to deal with; it's that Instagram, by its design, hosts CSAM-adjacent content that then creates an environment where CSAM can happen in large numbers, because you have the pipeline that develops it. And when Instagram is recommending to young girls videos to copy, and then, if they do that risky behavior, those videos get rewarded with distribution, then it's the product design that becomes the groomer, right? And so this is an example of things we learned watching the videos.
Something else we found is the kind of comments that people make attacking each other when you just look at posts that get wider distribution. And another thing we found is the nature of, for example, self-harm content. You think it's going to be explicit descriptions of how to take your life, whereas what you find is a wall of black content and poems about how the world is better off without you. And you have to take into account that, given how efficient Instagram and TikTok and other platforms are at recommending content that you linger on, for a teenager this can turn into a fire hose of thousands of pieces of content that individually are not bad, but collectively have an impact.
Justin Hendrix:
This report is written in a way that seems ready to be picked up directly by a regulator who might be running an inquiry or about to host a hearing. You provide a bunch of really detailed questions that you hope regulators will put to Meta in different contexts. As you step back from the report and think about its conclusions, what do you think are some of the most important unanswered questions that you'd like to see put to the company, perhaps at the next opportunity, should Congress ever host hearings again?
Arturo Béjar:
How do you measure the effectiveness of any of these tools? Tell us: when was the last time you did a comprehensive study of harm as experienced by users? Users are the ground truth. A kid knows if they received an unwanted sexual advance; the content doesn't matter that much. That's what your job is to prevent, in a way that's proportionate and thoughtful. Talk to us about how effective your reporting mechanisms are. Give us statistics on how many people enter the reporting flow and how many end up submitting issues. And of the people who submit issues, how many of them did you act on? And did those people actually get help with the issue they were dealing with?
Those are the questions I've been asking over and over and over, because that's how a company should manage these issues. What do you say about a company that is aware of the harms that their users are experiencing, and rather than investigate and mitigate them, instead chooses to look away and make it difficult for other people to investigate and mitigate? What we need is this kind of independent oversight of these safety programs, and real, meaningful transparency that gives parents peace of mind.
Laura Edelson:
I would want platforms to answer questions about their strategy for how they're going to take these metrics and deliver meaningfully safer systems and experiences for users. In any kind of cybersecurity environment, we measure harm because we want to reduce it. So I want all of these metrics, and then I also want to know: well, what's the plan for how you are going to drive these numbers down? Because ultimately, the thing that matters is that we need to get to a place where parents can have some kind of sense that if their kid has to use Instagram, because that's where the Model UN club organizes, they are not opening the door to all kinds of harm, from low-level graphic violence to their child being exploited.
And that's what I want to know. What is the plan to make things better and how are you going to inform the public about how you are either on track with that plan or not?
Justin Hendrix:
You point out that a lot of this, despite the regulations now in place around the world, comes down to choice, to the company's choices about how it comports itself. And I suppose on some level, given the way that Meta is governed, it comes down to one man's choice. Arturo, you have been in touch with that man and worked for him in the past. Anything you would say to him? If he were to listen to this podcast, what would you hope he'd take away?
Arturo Béjar:
I have been in the room when Mark Zuckerberg says, "This stops here. Tell me what you need. And then I'm going to be talking to you once a week until it is done." His attention means that mountains move within the company. I've been through that probably like 10 times in the time that I was there. If Mark woke up tomorrow and said, "I want to create a product that is truly safe for teens..." Proportionately so. Because everything we're talking about is reasonable. Teens should be able to get in trouble. Bad stuff should happen. They get exposed to things. Part of it is giving them the tools to navigate that.
And so we're not advocating for a perfectly safe product. We're advocating for a car that is safe enough that you can give the keys to your kid and know that if they hit something, the airbags are going to go off. If Mark woke up tomorrow and said, "I want Instagram for teens to be such that there's no inappropriate contact, no teen gets unwanted sexual advances, and it's really not the place where they're going to find eating disorder or self-harm content," and you work through these categories, it would take the company six months to a year to end up at a product that the industry would then follow, right?
Because what Meta does, for good and for bad, most of the industry just follows, because they set the pace for the industry. So that's what I wish Mark would do. I wish Mark would make a genuine, earnest commitment and then say, "And we're going to do this in the most transparent way. We're going to be working with the best people we can find in other parts of the world to address these issues. And we're going to publish our findings. We're going to publish our metrics regularly, so that you as parents know how well we're doing at making this better."
Justin Hendrix:
If he is listening, or if others are curious about these recommendations and these questions for regulators and others in a position to hold the company to account, they can find them in "Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors." It's a report by Arturo Béjar and Cybersecurity for Democracy, Fairplay, the Molly Rose Foundation, and ParentsSOS, supported by the Heat Initiative. Laura, Arturo, thank you very much.
Arturo Béjar:
Thank you, Justin.
Laura Edelson:
Thank you, Justin.