Minding the Legislative Gap: How State Legislators Can Make the Internet Safer for Children
Amina Fazlullah, Ashley Leon / Jan 10, 2025

Children use technology in nearly every aspect of their daily lives. At Common Sense Media, our research found that children spend an average of four and a half hours daily on their smartphones. While some kids spend just minutes on their phones, others average more than 16 hours a day, and that's not accounting for other types of screen use, like laptops, video game consoles, and televisions.
Technology has enabled children to communicate with family, participate in their communities, and contribute to youth culture. However, the online spaces children access were not designed with them in mind, and they are far from harmless. Throughout 2024, Senate hearings and lawsuits exposed popular applications that were facilitating sex trafficking, addicting children, and encouraging children to engage in dangerous and life-threatening activities.
At the federal level, lawmakers have moved too slowly to keep pace with technology and have not passed any significant legislation to protect kids online since 1998. While there was strong support for federal legislation during the 118th Congress, it remains to be seen whether the incoming Trump Administration and the new Congress will prioritize legislation to protect kids' safety and privacy online. The need to protect kids online cannot be overstated, however, and fortunately, states have stepped in to try to close this gap and are expected to remain active in this policy area in the coming year.
Kids Online Safety
States are breaking new ground by seeking to address addictive features in technology. For example, California's Protecting Our Kids from Social Media Addiction Act (SB 976), signed into law in September 2024, requires covered platforms to obtain verified parental consent before providing addictive feeds to child users. The law also requires covered platforms to mute notifications during the school day and in the overnight hours. Although a lawsuit from NetChoice has enjoined the provisions limiting notifications, several provisions providing safety measures for minors remain in place.
The language of SB 976 innovatively targets the harmful design features that make these platforms more addictive, rather than seeking to restrict content. Additionally, the governor of New York signed a similar bill, the NY SAFE for Kids Act, into law in June 2024. We believe that a features-based approach to legislation, as demonstrated in these laws, is essential to curbing the harmful features of social media, such as endless scrolling, autoplay, algorithmic feeds, push notifications, ephemeral content, likes, and comments, which the U.S. Surgeon General highlighted in an advisory on Social Media and Youth Mental Health. In the advisory and in an opinion piece for the New York Times, the Surgeon General notes that young people are currently experiencing a mental health crisis exacerbated by their exposure to these harmful features of social media.
The Surgeon General has also called for Congress to pass a law mandating social media warning labels visible to users on platforms. This presents an additional opportunity for states to take action. California has already introduced legislation to implement such a warning label. While a federal legislative proposal on a warning label for child users of social media still awaits action, states have the opportunity to enact features-focused legislation that follows the example of warning labels on tobacco, toxic chemicals, online gambling, and video games to educate consumers about the harms associated with addictive features on online platforms.
Some states are also considering legislation similar to the Maryland Kids Code, which took effect on October 1, 2024. That law requires that platforms design online products and services likely to be accessed by kids and teens with their wellbeing and safety in mind, use the strictest privacy settings by default, and provide clear and digestible privacy notices and community standards. At least one introduced state bill seeks to impose higher financial penalties, grounded in existing state negligence law, on social media platforms that are found in court to have caused harm to children.
Finally, to enhance privacy guardrails for kids, states have taken a proactive approach, either expanding existing comprehensive privacy laws to include child privacy protections or passing new comprehensive privacy laws that include such protections. States including New York and California have also considered the use of age assurance mechanisms, such as age flags, in implementing youth privacy protections.
Artificial Intelligence
Though the recent explosion of generative AI has brought artificial intelligence into mainstream discourse, the technology is quickly being deployed without guardrails to ensure efficacy or safety. AI now shapes personalized feeds on social media, is embedded in search engines, supports personalized learning, informs consequential decisions around education funding, and powers addictive and sycophantic AI companions. AI technology is also enabling harassment in schools through the use of AI-generated deepfakes.
Federal lawmakers came close to passing deepfake legislation in 2024 but ultimately failed. State governments have the opportunity to lead on the safe development of AI products in their legislative priorities for 2025. Legislators must act to ensure that AI's development and deployment maximize benefits while mitigating risks related to privacy, transparency, and bias. Responsible AI technologies embed efficacy and safety from the start: minimizing discrimination in training data, ensuring users understand how the technology works, protecting user data, and fostering inclusivity.
While many states have introduced, considered, and passed AI-related legislation, the particular harms posed to kids have not been sufficiently addressed. States cannot continue to let AI companies use vulnerable teens and children as beta testers. This is especially critical in light of a recent lawsuit concerning the death of a child after an AI chatbot encouraged him to take his own life. Absent federal action, state governments can pass laws to ensure that children can interact with new forms of technology safely while also working to protect children's data from inappropriate use in training AI models.
Digital and AI Literacy
Given the importance of technology in the lives of children and the high likelihood that young users will be early adopters, it is important for states to support initiatives that teach children how to use technology safely and responsibly. Our research on AI has found a significant gap in understanding among students, educators, parents, and other caregivers. As students and families seek to benefit from advances in technology, digital literacy remains a key component of ensuring its safe and effective use. For example, digital literacy equips children with the tools they need to critically assess the outputs generated by AI systems, understand the implications of sharing personal information with AI-powered platforms, and recognize and avoid potential harms, such as algorithmic manipulation, misinformation, or biased decision-making.
Further, the rapid advancement of AI risks exacerbating the digital divide, widening the gap between those with the resources to access and experiment with the latest technologies and those who lack reliable internet, high-quality devices, or the necessary skills. Without intentional efforts, families in underserved communities may face further marginalization, missing out on opportunities to benefit from AI while bearing its unintended consequences. To this end, states must prioritize equitable access to reliable, high-speed internet and high-quality devices, ensuring that no family is left behind in the digital age.
In addition to supporting children, AI and digital literacy initiatives should target parents, educators, and caregivers. Empowering adults with the knowledge to guide children effectively in navigating AI technologies fosters safer and more informed technology use at home, in schools, and in daily life. Digitally literate adults can model responsible online behavior, recognize potential risks, and foster critical thinking about the safe and ethical use of technology. Fortunately, existing initiatives such as the Digital Equity Act are poised to support digital literacy efforts now, and future rounds of grant awards present an opportunity to focus on AI literacy.
This post is part of a series examining US state tech policy issues in the year ahead.