
Why is KOSA So Popular and What Does It Mean for Child Safety Regulation in the US?

Vaishnavi J / Mar 20, 2024

Last month, an updated version of the Kids Online Safety Act (KOSA) gained broad bipartisan support in the Senate, with 62 senators now co-sponsoring the bill. It is set to be the first piece of child safety legislation in 25 years to even make it to the Senate floor. While the bill’s merits continue to be hotly debated, there is no denying that it is popular, having gained significant support since it was first introduced two years ago. Why is this the first piece of child safety legislation that has gained significant political and stakeholder support, and what does it suggest about the future of child safety regulation in the United States?

KOSA: A quick overview

The Congressional Research Service provided a helpful summary of KOSA for anyone unfamiliar with its requirements. The most controversial aspects of the bill are:

  • Duty of care: KOSA requires companies to exercise reasonable care when creating or implementing any design feature, in order to prevent and mitigate harms to minors that might arise from its use. It defines these harms as anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors. The Federal Trade Commission (FTC) is tasked with enforcing this duty of care.
  • Design requirements: The bill specifically asks companies to limit design features that may increase the amount of time that minors spend on a platform. Examples it cites include autoplay, infinite scrolling, rewards for time spent on the platform, push notifications, personalized recommendations, in-game purchases, and appearance-altering filters.
  • Default settings: KOSA requires services to turn on the highest privacy and safety settings by default for minors, and allow them to limit or opt out of features like personalized recommendations.

If the bill were to become law, companies would be required to undergo independent, external audits, allow researchers access to platform data, and provide substantial youth and parental controls to create a safer digital environment. It also asks platforms to address the lack of transparency into the inner workings, policies, and measured impacts of their products.

Related: What are some of the arguments for and against KOSA?

Why is it so popular?

KOSA has the support of 29 Republican senators, 34 Democrats, and one Independent, a resounding show of support across the aisle.

In an election year, kids and families come first

At the child safety hearings in January, some of the visuals and language the senators employed seemed geared towards expressing their support for youth in a key election year. In a climate where six in ten teens say that politicians aren’t reflecting their desires and needs, video clips of senators defending the rights of children are bound to be compelling. While we know that older adults, parents and non-parents alike, are concerned about the health of young people, young voters consistently raise mental health support as one of their key concerns and solutions when polled on a range of policy issues. Youth activists and even aspiring Congressional candidates now openly cite their age as a factor in their concerns about social media, having recently used it as young people themselves.

The data supports the broad frustration young voters feel with the lack of product or societal solutions to the rapidly evolving technologies they grew up with. The introduction of smartphones increased the amount of time people spend online, as well as the volume of information they consume. As crises like climate change, food insecurity, armed conflict, and economic stagnation become more widespread, young people also consume more information about these crises than they previously did, usually through social media platforms.

This increased time online has come at the expense of time spent offline and in person with others. Young people between the ages of 15 and 24 spent less than half as much time with their friends as they did two decades earlier, going from more than 150 minutes per day in 2003 to less than 70 minutes per day in 2020 (a figure from before the pandemic). Suicide is now the second leading cause of death for teens and young adults, and the number of high schoolers who reported consistently feeling sad or hopeless jumped by 50% from 2011 to 2021.

These data points and other factors make 2024 especially ripe for progress on child safety legislation. President Biden specifically called out the need to address social media’s impact on young people in his last three State of the Union addresses. The US Surgeon General also issued an advisory in 2023 about the impact of social media on young people, starkly stating that “there is growing evidence that social media use is associated with harm to young people’s mental health.” In parallel, parents, young people, and medical associations have been mobilizing and calling more vigorously for Congress to pass meaningful child safety legislation.

Limited ability of government or civil society to manage these changes

But why is the focus on tech reform? Historically, tech has not been at the center of youth mental health conversations. Despite social media platforms having been around for nearly two decades, most of the conversations around healthy tech usage focused on the work that parents, educators, and young people themselves need to do. These efforts have ranged from school curricula and guides for parents and caregivers to a plethora of digital literacy and citizenship programs for young people.

Unfortunately, such programs have been neither scalable nor quick enough to evolve in scope, not for adults and certainly not for children, who often demonstrate higher levels of digital literacy than their caregivers or educators. While digital literacy programs talk about the importance of two-factor authentication, teens and their parents fret about deepfake nude images and videos that classmates can easily generate and distribute, something current curricula do not address. Sextortion scams are rising rapidly, with limited education for young people on how to handle them. It is unclear whether government institutions can feasibly keep up with the pace of technological change and equip educators and caregivers with the tools to help children process it.

Parents and caregivers (a key voice and electoral bloc) have been left to address most of these rapidly evolving harms with their children, without the necessary resources or bandwidth to do so. Unlike earlier iterations of the internet, where children used a small number of apps for a wide range of purposes, today they use a smorgasbord of apps, each for a different purpose. This requires caregivers to stay updated on a large number of apps and their settings, determine how their children will engage with each one, and monitor their children’s activities. Aside from inevitably setting caregivers and children up for conflict, it is also vastly impractical. As frustration mounts, attention has increasingly turned to profitable tech platforms to understand what they are doing to be responsible partners in the ecosystem.

Large platforms have not demonstrated that they’re responsible players

Unfortunately, social media platforms have not used the golden opportunity of the last two decades to sufficiently build out processes and mechanisms around responsible design. Companies could have engaged with third-party experts, shared their findings, opened up anonymized datasets to researchers and policymakers, and established independent audit mechanisms.

This is not to suggest that companies have made no progress. Many have taken a number of measures over the years to improve the safety of their experiences for young people, and many of us working within these companies advocated tirelessly to get some of the most important policy and product interventions in place. The scale and pace of these changes, however, did not match the harms playing out online until platforms began to feel that regulation was imminent.

It is telling that tech platforms have launched more youth protections in the last four years than in the preceding fifteen years, in large part because of regulation. This has solidified public perception that companies will only act in the best interests of their users when they are required to by regulation. There are several case studies worth investigating further:

  • As the UK Children’s Code (formally known as the Age Appropriate Design Code) was being developed and the UK government started industry consultations, several large platforms, in parallel, announced measures to default teens into private accounts or experiences. TikTok made the accounts of users aged 13 to 15 private by default and turned off messaging for them. It also tightened controls over how users under 18 can interact with other users and TikTok content. Instagram continued to allow users under 16 to maintain both public and private accounts, but preselected the “private account” option for early teens signing up for new accounts.
  • In preparation for the implementation of Europe’s Digital Services Act (DSA), Meta, YouTube, Snap, and TikTok announced that they would no longer let advertisers target teens with ads based on their personal information or activity on their apps. The companies vary in how widely these changes apply globally, but there’s little doubt that the timing of their decisions was driven by regulation.
  • Closer to home, even the specter of regulation or scrutiny has prompted changes from platforms. In the months before the January 31 child safety hearings, nearly every company testifying suddenly had new protections for young people coming into effect. Meta announced that it would start hiding self-harm content from teens in Feed and Stories, even if it’s shared by someone they follow. It also turned off teens’ ability to receive direct messages on Instagram from anyone they don’t follow or aren’t connected to. Snap announced that it would give parents more insight into their teens’ settings and let them control whether Snap’s AI chatbots could respond to their teens. YouTube announced that it would limit repeated recommendations of videos related to topics like comparing physical features, fitness levels, and social aggression.

What does this mean for the future of child safety in the United States?

If it becomes accepted knowledge that only regulation or the potential for it will compel companies to change their practices to better protect young people, it could lead to a number of concerning developments. The first, ironically, is the swift passage of bad regulation that actively harms children. When we consider the digital rights of children, we need to consider all of their rights, including the right to access information. We have seen legislative proposals in US states like Florida that threatened to cut young people off from the internet or social media entirely, at a stage of their lives when they arguably stand to benefit from it the most. Identity formation, relationship development, and learning about the world are all supported by online communities, resources, and access to information.

Bad regulation also risks stifling the thriving American internet ecosystem, including emerging startups and midsize businesses. When policymakers pass bills to regulate the internet ecosystem but focus their ire on big social media platforms, they overlook the potential adverse impacts these bills can have on smaller platforms dedicated to gaming, messaging, commerce, virtual reality, or artificial intelligence. These platforms will arguably have an outsized impact on young people in the coming years, and already have a significant effect on their safety and wellbeing. Yet we might see regulation that reacts only to the potential harms of social media, without considering the consequences for these other platforms and apps.

To address this, we need more, and more meaningful, transparency from companies about the ways in which young people use their services, explainable policy and product decisions, and a clearer accounting of the impact (if any) their products have on the safety and wellbeing of young people. We also need a comprehensive, industry-wide push towards product development standards that can be governed by the industry with oversight from policymakers and civil society. And we need more collaborative spaces in which policymakers, industry practitioners, independent researchers, young people, parents, educators, and youth development experts can jointly develop solutions to ensure that we are designing products and experiences that are truly in the best interests of young people.

Related: Read more perspectives on KOSA

Authors

Vaishnavi J
Vaishnavi J is the founder & principal of Vyanams Strategies, advising companies and civil society on how to build safer, more age-appropriate experiences for young people. She is the former head of youth policy at Meta.
