The Gaps Left Unfilled by the Senate Tech CEO Hearing on Child Safety

Alicia Blum-Ross / Feb 1, 2024

The US Senate Judiciary Committee convenes a hearing on child online safety on January 31, 2024.

After yesterday’s US Senate Judiciary Committee hearing featuring the CEOs of Snap, Discord, X, Meta, and TikTok, there is more momentum than ever to pass bipartisan legislation about kids and tech. Yet for all the emphasis on regulation, and on potential accountability through the courts, the hearing left many gaps to fill. As a former child safety lead at Twitch and Google/YouTube, and an academic with two decades of experience researching kids and tech, I know the details matter. For regulation to be effective, there needs to be an agreed definition of what “reasonable measures” for child safety (to use the language of the proposed Kids Online Safety Act) should look like.

In the Senate hearing, Senator Amy Klobuchar (D-MN) compared the situation to the recent grounding of the Boeing 737 MAX 9 fleet while the planes were inspected for safety. Checklists were checked, screws were screwed, and the fleet was deemed safe to fly. But no such agreed checklist exists for child safety online. It will take some challenging negotiations to get there, and we can and must, but there are a number of details to work through first.

When a Boeing plane lost a door in mid-flight several weeks ago, nobody questioned the decision to ground a fleet of over 700 planes. So why aren't we taking the same type of decisive action on the danger of these platforms when we know these kids are dying? – Sen. Amy Klobuchar (D-MN)

Within the hearing there were differences in how the platforms characterized feature safety. Take Snapchat’s disappearing messages: ephemeral content (whether livestream, text, audio, image or video) circulates differently than media uploaded to a site like YouTube. The messages can’t be viewed, searched or recommended later, meaning they’re much less likely to ‘go viral’ through an algorithmic feed. This isn’t foolproof, since most kids know how to screenshot, but the design provides natural protections, for example if a teen has been pressured into sharing a nude. At the same time, abuse is harder to detect and report, which is why disappearing messages can be a go-to for illegal activities, and ephemerality means more limited evidence for later parent or law enforcement follow-up. So disappearing messages can be both good and bad for youth safety. Effective legislation will require some conclusions and guidance on how they should be monitored and deployed, since no such guidance currently exists.
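
To make the design trade-off concrete, here is a minimal sketch, in Python, of how an ephemeral message might be stored with an expiry so it cannot be retrieved, searched, or fed into a recommendation pipeline later. It is purely illustrative; the names and the 24-hour window are assumptions, not any platform’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative only: an ephemeral message record that expires after it is viewed
# or after a fixed time window. Field names and the 24-hour TTL are assumptions,
# not any platform's actual design.

@dataclass
class EphemeralMessage:
    sender_id: str
    recipient_id: str
    body: bytes  # in a real system this would be an encrypted payload
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    ttl: timedelta = timedelta(hours=24)
    viewed: bool = False

    def is_expired(self) -> bool:
        return self.viewed or datetime.now(timezone.utc) > self.created_at + self.ttl

def open_message(msg: EphemeralMessage) -> Optional[bytes]:
    """Return the body once, then mark the message viewed so that later requests,
    including search indexing or recommendation jobs, get nothing back."""
    if msg.is_expired():
        return None
    msg.viewed = True
    return msg.body
```

The same expiry that protects a teen from a regrettable share is what limits the evidence available to a parent or investigator afterward, which is exactly the tension any “reasonable measures” standard will have to resolve.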

There are some clear-cut safety practices that most companies can and should commit to. No one knows better than Trust & Safety workers, and the professionals at agencies like NCMEC and in law enforcement, the range of horrific sexual abuse and coercion children can be subjected to online. Groups like the Phoenix 11, along with the parents and young people who testified in the opening video clip yesterday, have been instrumental in bringing to light the impact of this abuse. Child sexual abuse material (CSAM) is easy to recognize in comparison with other forms of online harm, and therefore the most obvious starting point for “reasonable measures.” Hash-matching has been available for decades, so companies can easily find, report and prevent re-shares of previously identified CSAM. For many years hash-matching was only interoperable for still images. Now, thanks to industry coordination through the Tech Coalition, in partnership with NCMEC and Thorn, video hashes in multiple formats are made available to all. This means a CSAM video shared on Facebook can be removed almost instantly if it resurfaces on TikTok, even though the platforms use different technologies. Tools like PhotoDNA and CSAI Match are available free of charge (from Microsoft and YouTube/Google respectively), so there’s really no excuse for not implementing them if you allow user-generated uploads on your platform.
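
In outline, the hash-matching flow is simple, which is part of why it is such an obvious baseline. The Python sketch below shows only the general shape: hash each upload and compare it against a list of hashes of known abuse imagery. It uses a plain cryptographic hash as a stand-in; real deployments rely on robust, perceptual-style hashes such as PhotoDNA or CSAI Match, whose actual interfaces are not reproduced here, and the function and list names are invented for illustration.

```python
import hashlib

# Hypothetical sketch of hash-matching at upload time. A plain SHA-256 only
# catches bit-identical files; tools like PhotoDNA and CSAI Match use robust
# hashes that survive resizing and re-encoding. Names below are illustrative.

KNOWN_ABUSE_HASHES: set[str] = set()  # in practice, sourced from NCMEC and industry hash lists

def compute_hash(file_bytes: bytes) -> str:
    return hashlib.sha256(file_bytes).hexdigest()

def screen_upload(file_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it matched a known hash.
    A real match would be blocked, reported to NCMEC, and preserved as evidence,
    not silently deleted."""
    return compute_hash(file_bytes) not in KNOWN_ABUSE_HASHES
```

The interoperability the Tech Coalition enabled is what lets the same hash lists do this work across platforms that otherwise share no technology.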

Hash-matching is a safety standard most major companies already implement (although Apple remains a hold-out). But as the technology becomes more complex, agreed-upon standards become less clear. Jason Citron, the CEO of Discord, referenced the collaboration between Discord and Thorn to detect potential grooming interactions in chat messages – an AI-based classifier that they will make available to others. AI is essential: no company can detect potential child exploitation across hundreds of languages and billions of interactions using user reports and human moderation alone. But then again, grooming is often indistinguishable from flirting, like a chat thread telling someone how beautiful they look. Terrifying if it’s an adult messaging a nine-year-old, fine if it’s between consenting 30-somethings.
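
A toy sketch makes the problem visible. The scoring function and thresholds below are invented for illustration and have nothing to do with the actual Discord/Thorn classifier; the point is only that identical text produces an identical score, so the routing decision has to depend on context the model alone does not see.

```python
# Toy illustration only, not the Discord/Thorn classifier: identical text scores
# identically, so context about who is talking to whom has to drive the decision.

def grooming_risk_score(message: str) -> float:
    """Stand-in for an ML model returning a risk score between 0 and 1."""
    risky_phrases = ["how beautiful you look", "our little secret", "don't tell your parents"]
    return min(1.0, sum(0.4 for phrase in risky_phrases if phrase in message.lower()))

def route_message(message: str, sender_age: int | None, recipient_age: int | None) -> str:
    score = grooming_risk_score(message)
    if score < 0.3:
        return "no_action"
    # Same words, very different meaning depending on the ages involved.
    if sender_age and recipient_age and sender_age >= 18 and recipient_age < 13:
        return "priority_human_review"
    return "human_review"

print(route_message("I love how beautiful you look today", sender_age=41, recipient_age=9))
print(route_message("I love how beautiful you look today", sender_age=34, recipient_age=33))
```

And as the next paragraph explains, the age fields this sketch takes for granted are exactly what frontline moderators typically cannot see.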

AI can’t spot-check for context like a moderator can, or determine whether content meets the illegality bar for reporting to NCMEC like a legal specialist does. AI classifiers also need a lot of data to build, and models based on hundreds of thousands of potentially illegal interactions aren’t easy to develop, nor should they be. There are also more prosaic impediments. Generally, for good privacy-respecting reasons, frontline content moderators do not see user account information, including user age. So they can’t compare the content to the declared age on the account, if that was even correct to begin with (which in many cases it isn’t). Companies need guidance on how they should employ privacy-preserving AI models, leverage user age (including potential age assurance), and establish sustainable staffing models before AI-enabled grooming detection can be considered a reasonable standard.

Another area where consensus is unsettled is the role of parents and parental controls. Four out of the five companies that testified have launched parental controls in recent years. Parental controls can be useful – full disclosure, I worked on YouTube’s tools and use them daily and successfully with my own 10-year-old twins. But parental controls are only one strand in the safety net kids need. When asked how many parents had used the tools since they were introduced, Snap’s answer was illuminating. Yes, many parents say they want the tools, but in reality they struggle to set them up, though innovations like TikTok’s QR code family pairing make it easier. And each individual platform has its own tool, so if your kids (like mine) play Roblox and Minecraft and have Google Classroom through school, and in a few years will need Meta and Snapchat and Discord set up too? That’s several more years of being the family’s full-time IT support.

And I (like almost all tech workers) am a terrible reference point. I’m digitally literate, have time and resources and (I hope) my kids’ best interests at heart, and I try to keep a pretty open mind about their tech use. Plenty of kids live in homes where parents don’t have the time or skills to set controls up, or aren’t present at all, or where the family shares a single device (so restrictions are often disabled when a parent or older sibling needs access), or where the parents themselves are the source of abuse. But even in a best-case scenario of caring families and easy-to-use tools, decades of research have shown that parental controls alone are ineffective at helping children navigate the online world, and are a shaky sole foundation for a legislative or parenting strategy. This makes them more of a should-have than a must-have for platforms focused on teens.

Rather than hyper-focusing on parental controls, as many state legislatures have, those in charge of determining what reasonable safety standards should look like need to focus on the impact of design choices on teen safety. Until the last five years, on most platforms, a 13-year-old’s experience was virtually indistinguishable from a 33-year-old’s: the same features and content could be accessed. A ‘child,’ in tech industry parlance, meant someone under 13 (and therefore generally someone who should be removed from the service). But a 13-year-old doesn’t magically wake up as an adult on their birthday, and very little attention was paid to what changes platforms should make for teenagers - legal users, but not exactly grown-ups.

Spurred by regulation and public pressure, but also by tech workers ourselves – companies have hired more child safety experts, and the workforce is newly old enough to have teenagers of its own – the last few years have seen more innovation in teen safety than the whole decade before. Meta and YouTube made big changes to whether and how sensitive content is recommended, Discord created teen-tuned safety settings, and Snapchat and TikTok altered how teens can be contacted in DMs. These are all great examples of how to design for 13- to 17-year-olds. Teens have unique developmental needs and benefit from additional support – not just optional safety resources (even though these are sometimes excellent), but also in-product notices and nudges. A more restrictive default, combined with a well-timed forced choice in the user experience, can provide the friction a teen needs to reconsider a risky post or comment. Age-tuned settings are also far more palatable for older teens than blocking access outright; a walled garden they won’t use simply pushes them toward platforms with fewer protections in place.
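
As a rough illustration of what ‘age-tuned defaults’ might mean in practice, the configuration sketch below applies stricter starting settings to 13- to 17-year-old accounts. Every setting name and value here is invented for the example; the point is the pattern of protective defaults that a teen can later adjust, ideally through a clear, well-timed prompt rather than a buried menu.

```python
# Illustrative only: setting names and values are invented, not any platform's policy.
TEEN_DEFAULTS = {
    "dm_requests": "friends_only",         # strangers cannot message first
    "sensitive_content_in_feed": False,    # not recommended, though still searchable
    "account_suggested_to_adults": False,  # not surfaced to unknown adult accounts
    "late_night_notifications": "muted",
}

ADULT_DEFAULTS = {
    "dm_requests": "anyone",
    "sensitive_content_in_feed": True,
    "account_suggested_to_adults": True,
    "late_night_notifications": "on",
}

def default_settings(age: int) -> dict:
    """Stricter defaults for 13-17; the user can change them later, with a
    forced-choice prompt providing the friction to make that a real decision."""
    return dict(TEEN_DEFAULTS if 13 <= age < 18 else ADULT_DEFAULTS)
```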

We need to understand the impact these changes have had before they are mandated as reasonable safety measures, since what works on one platform may not work on all of them. The good news is that these products are already ‘in the field,’ so we can evaluate and learn from them (if platforms are willing and able to do so candidly). Even better, young people are highly motivated to have their say about online safety, and are crucial for holding us all – tech workers and policymakers – to account as we design standards that put better protections in place without sacrificing access to tools teens deeply value. While no one-size-fits-all model will work for every platform or every family or every teen, the conversation can’t simply be about whether teens should access digital platforms, but about what their experience looks like when they do.

Authors

Alicia Blum-Ross
Alicia Blum-Ross, an academic researcher and co-author of Parenting for a Digital Future: How Hopes and Fears About Technology Shape Children’s Lives, previously served as Senior Director for Policy at Twitch, Head of Youth Strategy at YouTube and Public Policy Lead for Kids & Families at Google.
