The California Age Appropriate Design Code Act May Be the Most Important Piece of Tech Legislation You’ve Never Heard Of
Jesús Alvarado, Dean Jackson / Jul 9, 2024
Jesús Alvarado and Dean Jackson are fellows at Tech Policy Press.
In 2021, former Facebook product manager Frances Haugen blew the whistle. She provided thousands of internal company documents to journalists, leading to “the Facebook Files,” a series of investigative reports first published in the Wall Street Journal that aired years of the company’s dirty laundry. Piece by piece, the documents provided evidence of how choices about the platform’s design contributed to growing political polarization, declining teen mental health, spreading vaccine skepticism, and other social problems. These revelations fueled a fast-growing debate about social media and children’s health, safety, and overall well-being.
Fast forward to today, and state legislatures have enthusiastically taken up this issue. A 2023 report from the University of North Carolina at Chapel Hill found that 13 states had passed a total of 23 laws regarding children’s online safety. More have passed since, and more are in the works: in May of last year, writing for Tech Policy Press, Tim Bernard identified 144 bills focused on child online safety proposed across state legislatures. Some are blunt: in March 2024, for instance, Florida Governor Ron DeSantis signed a bill prohibiting children younger than 14 years old from accessing social media. (The law will almost certainly be challenged in court.)
Another state law, however, the California Age Appropriate Design Code Act (CAADCA), took a subtler approach. Modeled after the United Kingdom’s Age Appropriate Design Code (UK AADC), it requires internet companies to assess how features on their platforms might present risks to young users and to institute, by default, the strongest possible protective settings for those users. In September 2023, though, a federal District Court enjoined the law, and it is now headed to the US Ninth Circuit Court of Appeals for a hearing set for July 17.
The legal battle over the CAADCA has implications beyond kids’ online safety. It raises important questions about free expression: when is software considered speech? When can corporations be held responsible for its consequences? And what are the constitutional limits of social media regulation in the US? As policymakers look to design codes as a potential solution not just for child safety but a range of other online harms, the outcome of the lawsuit concerning the CAADCA may decide the answers to these questions.
What is a design code, and how would it protect kids?
AB-2273, the CAADCA, was passed unanimously by the California legislature and signed by California Governor Gavin Newsom on Sept. 15, 2022. As a design code, the law attempts not to regulate specific types of content but the features and mechanisms used by platforms to display and distribute user-created content and advertisements. This is an important distinction from a First Amendment perspective: it recognizes the political and legal limits to direct state regulation of speech.
The law applies to online services, products, or features “likely to be accessed by children.” As specified by the CAADCA, this includes those with ads directed toward children, those commonly used by large numbers of children, and those with “design elements that are known to be of interest to children, including, but not limited to, games, cartoons, music, and celebrities who appeal to children.” It does not include common carriers such as internet service providers.
Under the law, these businesses must carry out biennial “Data Protection Impact Assessments” (DPIAs) for features likely to be used by children. DPIAs must “identify the purpose of the online service, product, or feature, how it uses children’s personal information, and the risks of material detriment to children that arise from the data management practices of the business.” This includes exposure to potentially harmful content as well as “harmful contacts.” It also implicates content distribution algorithms, targeted advertising, data collection, behavioral nudges, and other common aspects of online life.
Despite using the word “harm” (or a variant of it) 11 times throughout its text, the law never defines specific harms. Based on California's arguments in court, this is presumably because the law mostly requires services to abide by their own terms of service and is not prescriptive about what harm entails. However, the power to enforce the law through fines and civil penalties rests with the California Attorney General’s office. As such, the state’s interpretation of harm could carry great weight.
Finally, the law includes other obligations and restrictions on online services. They are required to “estimate the age of child users with a reasonable level of certainty” and set their default privacy settings to a high level. They are also prohibited from using children’s personal information for profiling or other commercial purposes beyond the immediate operation of their service, with some exceptions.
A British import to the Golden State
Although California modeled its age-appropriate design code on the UK’s, the two differ in some respects. California’s is an enforceable law, whereas the UK’s is a statutory code.
The UK’s design code, though, is very specific and prescriptive about what big tech companies are asked to do for the safety and well-being of their young consumers. As the 5Rights Foundation’s UK team puts it, these are the 15 core principles any tech firm – whether based in the UK or simply serving UK consumers – should follow. In short, they say that platforms should:
- Consider the best interests of the child from the outset when building a feature or tool;
- Perform data protection impact assessments and “mitigate risks to the rights, freedoms of children who are likely to access your service”;
- Take an age-appropriate approach, so that children in different age groups receive protections for their personal data suited to their age;
- Provide transparent terms of service or community standards with language minors can understand, especially when it pertains to how their personal data is collected and used;
- Avoid the detrimental use of children’s data, especially in dark patterns;
- Uphold their own policies and community standards and enforce them, just as users are expected to follow them;
- Set all default settings to “high privacy,” unless there is a compelling reason to do otherwise;
- Minimize data collection, gathering only what is needed for the parts of the service a child is actively and knowingly using;
- Not share underage users’ data, internally or externally, unless there is a compelling reason to do so;
- Set geolocation tracking to “off” by default, clearly tell the child when it is enabled and why, and turn it back off after each use;
- Give the child a clear indication that they’re being monitored when parental controls are activated by a parent or caregiver;
- Turn off by default any profiling tools that use children’s data to predict or curate the content they see on social media;
- Stop the use of nudge techniques, for example making the “yes” button more prominent than the “no” button when asking if minors want to view a site;
- Apply these principles also to connected toys and devices, part of the so-called “internet of things”;
- Always provide online tools where children can access their own personal data and even request its deletion.
California’s Age Appropriate Design Code Act overlaps with several of those design principles. In summary, the CAADCA would have tech companies set privacy settings to the highest level by default, conduct DPIAs, provide age-appropriate privacy information, let minors know when they’re being monitored, and minimize targeted advertising to minors. On the latter two points in particular, the UK and California codes overlap in directly attempting to tackle dark patterns online.
While the UK code has been in place for close to three years, California’s law was enjoined by the courts before it could take effect. That’s because Big Tech’s lobbying arm, NetChoice, sued the state last year. The group argues the law would infringe on social media users’ First Amendment rights – the right to free speech, free expression, and freedom to receive information. In an interview with Marketplace Tech’s Lily Jamali in March, NetChoice General Counsel Carl Szabo argued that the CAADCA is an “unconstitutional attempt to control speech online.”
Across the pond, though, the UK’s age-appropriate design code has not been shown to infringe on internet users’ online speech or to change how people there use the internet. In late March, Children and Screens, a non-profit organization whose goal is to help young people lead healthy lives in a digital age, published a report analyzing how the UK AADC has affected internet use there. In short, the 36-page report highlights that since the UK AADC went into effect, some 90 internet platforms have complied with its prescriptions on youth safety and wellbeing, privacy, and security, and it points to the positive effects of provisions intended to help limit children’s screen time.
To add context to some of that, a representative of 5Rights Foundation’s UK team said in an email that “some of these changes include TikTok implementing a curfew on push notifications for users 13-15 (after 9 pm) and for users 16-17 (after 10 pm).” They added that platforms like YouTube, TikTok and Instagram now, by default, “[give] users (under 18, and under 16 respectively) the most private settings – including restrictions on direct messaging and content uploading, and YouTube Kids turning off autoplay by default.”
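For a sense of what such a design-level change looks like in practice, here is a minimal sketch of an age-banded notification curfew along the lines 5Rights describes. It is purely illustrative: the age bands and the 9 pm and 10 pm cutoffs come from the quote above, but the function names and the 6 am “resume” hour are assumptions, not TikTok’s actual implementation.

```typescript
// Illustrative sketch only – not TikTok's actual implementation.
// Push notifications are held after 9 pm for 13-15-year-olds and
// after 10 pm for 16-17-year-olds; the 6 am resume hour is an assumption.
type AgeBand = "13-15" | "16-17" | "adult";

function curfewHourFor(band: AgeBand): number | null {
  if (band === "13-15") return 21; // 9 pm local time
  if (band === "16-17") return 22; // 10 pm local time
  return null;                     // no curfew for adults
}

function maySendPush(band: AgeBand, localHour: number): boolean {
  const curfew = curfewHourFor(band);
  if (curfew === null) return true;
  // Suppress notifications from the curfew hour until the assumed 6 am resume.
  return localHour >= 6 && localHour < curfew;
}

// A push queued at 9 pm reaches a 17-year-old but not a 14-year-old.
console.log(maySendPush("16-17", 21)); // true
console.log(maySendPush("13-15", 21)); // false
```

The point of the example is that compliance here is a matter of product behavior and defaults rather than a judgment about the content of any particular post.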
And though the UK AADC is more detailed and prescriptive than the CAADCA, it is vague about the use of personal data in one space crucial to children: educational tech. More clarity is needed about how the industry’s use of biometric data and AI, for instance, should comply with the UK AADC’s 15 standards. 5Rights Foundation says, “clarity on key data protection principles including the purpose limitation and the lawful basis for ‘public task’ processing (e.g. processing of data considered to be in the public interest) still needs to be addressed.”
Understanding the legal fight over the CAADCA
As for the legal challenge to the CAADCA, NetChoice v. Bonta, the District Court decision makes several arguments that boil down to two essential concerns. The first involves the speech rights of social media companies. The second involves the speech rights of their users.
The CAADCA is designed to be content neutral: lawyers for California argued, for example, that the required DPIAs regulate platform “features,” not speech. But the District Court held that these risk assessments are a form of compelled speech, which “require businesses to affirmatively provide information to users, and by requiring speech necessarily regulate it.”
To evaluate this argument, it is important to first acknowledge that the First Amendment does not totally prohibit the regulation of speech. Restrictions on speech must instead meet certain judicial standards. Laws that regulate speech based on content face the highest level of “strict” scrutiny; non-content-based restrictions and regulations on commercial speech face a lower level of “intermediate” scrutiny. The District Court ruled that the CAADCA failed to meet even this lower standard, which requires the law to address “an important or substantial government interest” via restrictions “no greater than is essential to the furtherance of that interest.”
One of the several arguments the Court lays out for this finding is that the CAADCA does not actually require companies to assess the harms of product designs, but only their “data management practices,” and does not require companies to take mitigation measures in any event. The judge therefore ruled that the law does not address the stated government interest.
This seems like a misreading of the law: while the “Data Protection” in DPIA might suggest it is limited to data management issues, the text of the law is clear that it covers any features likely to be used by children, and it provides attendant definitions. And while the law does not prescribe mitigation measures, it imposes penalties on companies that identify risks in their DPIAs and then fail to mitigate them. Further, according to Meetali Jain from the Tech Justice Law Project, the judge “basically stated that collecting or data collection is… protected by a company's speech interests. And that just flies in the face of everything we understand about data privacy.”
The ruling doesn’t end there. Assessing provisions of the CAADCA that require platforms to enforce the terms of service they write and provide to users, the District Court held that requiring a company to enforce its own existing policies is equivalent to regulating its editorial judgment, and thus its free expression. The judge makes this argument in strong terms, endorsing “NetChoice’s position that the State has no right to enforce obligations that would essentially press private companies into service as government censors, thus violating the First Amendment by proxy.” If upheld, this ruling would mean that digital platforms are not accountable to users for their professed terms of service, and that efforts to require such accountability would be treated under law as if the government had written the rules itself. Such a ruling would preempt many, if not all, current legislative approaches.
This would be a bad outcome. Worse might be if the Ninth Circuit Court of Appeals upholds the District Court’s ruling about “dark patterns,” design elements that steer users toward certain actions or toward spending more time and money on a service; the “infinite scroll” of newsfeeds is a common example. The District Court argues that California has failed to demonstrate specific harms that would be addressed by prohibiting or regulating dark patterns.
But applying intermediate scrutiny – a test reserved for speech restrictions – to the regulation of dark patterns raises a prior question: are dark patterns speech at all? Does the computer code behind features like infinite scroll convey a message?
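To see what is at stake in that question, consider what the code behind an infinite scroll actually does. The sketch below is a hypothetical, bare-bones version; the “/api/feed” endpoint and the “feed” element id are invented for illustration.

```typescript
// A bare-bones infinite scroll: when the user nears the bottom of the page,
// quietly fetch and append the next batch of posts so the feed never ends.
// The endpoint and element id are hypothetical.
let nextPage = 1;
let loading = false;

async function loadMore(): Promise<void> {
  if (loading) return;
  loading = true;
  const response = await fetch(`/api/feed?page=${nextPage}`);
  const items: { html: string }[] = await response.json();
  const feed = document.getElementById("feed");
  for (const item of items) {
    const post = document.createElement("div");
    post.innerHTML = item.html; // append whatever the server sends next
    feed?.appendChild(post);
  }
  nextPage += 1;
  loading = false;
}

// Load another page whenever the user scrolls within 500px of the bottom.
window.addEventListener("scroll", () => {
  if (window.innerHeight + window.scrollY >= document.body.offsetHeight - 500) {
    void loadMore();
  }
});
```

Nothing in a snippet like this selects or arranges a particular message; it simply decides when to fetch more of whatever comes next.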
The 2021 case of Burns v. Town of Palm Beach is instructive here (and perhaps somewhat amusing). According to the record in the case, Donald Burns, a wealthy Palm Beach resident, hoped to bulldoze his beachfront mansion and replace it with a new one twice as large. When the Town of Palm Beach denied his building permit, Burns sued on the grounds that his right to free expression had been violated.
Ultimately, the Eleventh Circuit Court of Appeals ruled that Burns’s proposed design was unlikely to communicate a message to viewers; in fact, Burns’ plan obscured the house from view with hedges and landscaping. “A viewer,” the Circuit Court ruled, “cannot infer a message from something the viewer cannot view.” Burns’ architectural plans were functional, not expressive, and therefore not protected by the First Amendment. An amicus brief filed by a dozen internet scholars on behalf of California in NetChoice v. Bonta makes a similar argument: features like infinite scroll “target functional design, not expressive activity.”
If the Court of Appeals upholds the earlier ruling in spite of this argument, it could spell trouble for decades of regulatory law. Other industries are required to disclose information about risk to the public; nutrition labels and medical warnings, for example, have not been found to violate the First Amendment (though this doesn’t mean industries accept them happily). Workplace notices of Occupational Safety and Health Administration standards are not considered compelled speech. Courts would not accept a commercial airplane without smoke detectors as a protected form of free expression.
Why, then, should software uniquely implicate the First Amendment? Some elements of platform design, like content distribution and amplification algorithms, might invoke the editorial judgment of a company and therefore be expressive, though experts disagree about this. But to argue that all digital design is categorically expressive seems extreme.
Even if these regulatory rules govern speech, it is unclear if that speech deserves a higher level of protection or the lesser degree typically given to commercial speech. “Some say that any kind of content that's on different platforms… would constitute as commercial speech in that it's all being fueled by algorithms, which effectively generate revenue for these businesses,” Jain said, emphasizing that there are two main schools of thought on this matter. “There's others who say that commercial speech should really be limited to things like advertisements and things that have a very direct transactional value. So exactly what is encompassed within commercial speech, I think, remains like shifting goals.”
Beyond the rights of platforms, the District Court ruling also found that the CAADCA violated the rights of users in a number of ways. The age estimation requirement is the prime example. Some observers—and the District Court—fear that overcautious tech companies might institute some kind of verification regime in which even adults have to provide photo identification to access certain online services. This would almost certainly deter some users, both for privacy reasons and simply because such systems might cause frustrating delays. Previous case law suggests this would likely be an unconstitutional limit on the right to access information.
Other critics worry that estimating a user’s age would require companies to collect more data on children, not less – though the law places limits on how this information can be used and how long it can be retained. Here the law’s wording worked against it: the provision could be read as requiring tech companies to collect more data on all users in order to determine which ones are underage.
Jain said that at face value, the CAADCA requires tech companies to collect no data for the purposes of age estimation—not verification—beyond what they do already; Jain calls this “age assurance,” as opposed to traditional age verification methods, which might require users to upload identification to verify their age before entering a site. On this matter, civil society organizations, such as the Electronic Privacy Information Center (EPIC), and even some government officials, have coalesced in support of the CAADCA. If the law did require explicit methods of age verification, such entities would not support the necessary expansion of sensitive data collection.
The law’s wording also failed the state on another point: the District Court faulted the CAADCA for not defining what material is “harmful” to children. NetChoice argues—compellingly, the ruling says—that some platforms and even news sites might simply bar children from accessing their services altogether rather than risk violating the law, even incidentally. This, too, could represent an unconstitutional burden on the right to access information.
Finally, while some content algorithmically targeted at children can be harmful, it can also be beneficial. While the CAADCA makes exceptions for such beneficial “profiling,” a content-neutral adjudication of some questions is probably impossible. The ruling gives the example of information about coping with teenage pregnancy: who decides if this is beneficial or harmful to a minor?
Concerns from LGBTQ+ advocates
Accompanying the questions about “harm” are others about the phrase “best interests of children,” which appears throughout the CAADCA. Like “harm,” the “best interests of children” is not defined, nor does the law prescribe how to measure what is objectively in a child’s best interest.
Carlos Gutierrez, Deputy Director and General Counsel at LGBT Technology Partnership and Institute, said this is problematic. He noted that while California is a mostly liberal state, politically speaking, and its legislation tends to reflect that, its lawmakers rarely consider how other state legislatures inspired by California laws might implement them in their own jurisdictions. Gutierrez is concerned that the CAADCA’s failure to define those four words — “best interests of children” — could leave room for other states to do more harm than good.
“The same provisions that we're concerned about in California are even more concerning for us in other states,” Gutierrez said in an interview.
“So when we're faced with language like in the [CAADCA], [which] talks about protecting children from harm without any further context, it just leaves it very open to abuse by anyone that would be hostile to LGBTQ+ issues, like gender-affirming care, like drag races, drag shows, any of that,” Gutierrez said. He explained that states that have moved to ban gender-affirming care for transgender youth, restrict queer public events, and even ban books that discuss gender and sexuality would benefit from this type of broad language. Policymakers in those states, he said, are already making the argument that the LGBTQ+ community is harmful to children.
And the concern doesn’t stop at states that would take advantage of the CAADCA’s broad language to advance an anti-LGBTQ+ agenda. Language like “best interests of children” also assumes that the parent or guardian of a minor knows what’s best for that young user. But when it comes to LGBTQ+ youth, at least, a parent or guardian does not always know what is best for their child, Gutierrez said, a concern reflected in statistics: LGBTQ+ youth are 120% more likely to experience homelessness, and 20% of LGBTQ+ youth experience homelessness before the age of 18, according to the National Coalition for the Homeless. Gutierrez attributes these numbers to the fact that not all parents or guardians are accepting of their LGBTQ+ child.
“So to kind of say, ‘we're just going to give parents the ability to control everything about [LGBTQ+ youth’s] digital life if they're under 18 years old,’ does a disservice to marginalized communities, because we (LGBTQ+ people) need independence,” said Gutierrez, emphasizing that these broad provisions in the CAADCA can have an impact on other historically marginalized communities.
Who is responsible for kids’ safety online?
Despite Gutierrez’s concerns, NetChoice General Counsel Carl Szabo has argued that “it’s a parent’s responsibility to decide what is best for their kids and their family.” In a statement, NetChoice has characterized the CAADCA as “replacing parental oversight with government control.” This position is consistent with a broader industry trend toward shifting control over—and responsibility for—trust and safety issues to users. Tech executives have come to see trust and safety as a costly, never-ending, no-win struggle. They would rather opt out.
In August 2023, for instance, the Washington Post reported that Meta had “quietly begun offering users new controls to opt out of the fact-checking program.” This past March, the company announced that Threads, its Instagram-powered X (previously Twitter) lookalike, would be compatible with the fediverse, giving users “more freedom and choice in the online communities they inhabit.” It seemed like an endorsement of Twitter founder Jack Dorsey’s vision for decentralized social media, embodied by a newer application, Bluesky, whose board Dorsey recently left. Bluesky itself followed the earlier Mastodon, which briefly became popular with self-exiled users looking to escape reliance on Twitter after Elon Musk purchased that service.
At first blush, these moves seem unobjectionable or even admirable. And a broader embrace of decentralized social media could have real benefits. The downside is that it allows platforms to ignore and outsource responsibility for real harms while continuing to profit from them. When the New York Times reported on the accounts of children styled as influencers by their parents, it warned that “the accounts draw men sexually attracted to children,” who “sometimes pay to see more.”
In a statement to the Times, Meta spokesman Andy Stone noted “anyone on Instagram can control who is able to tag, mention or message them, as well as who can comment on their account ... On top of that, we prevent accounts exhibiting potentially suspicious behavior from using our monetization tools, and we plan to limit such accounts from accessing subscription content.” Platforms, however, are aware that a majority of users never change the default settings on social media apps. Providing users with various options for controlling their online experience is not a scalable solution to online harms. It is, however, excellent cover for the platforms themselves.
Lessons for other state legislatures
Other state legislatures seem unconvinced that 100% of the responsibility for children’s online safety falls on the shoulders of parents. As Marisa Shea from the 5Rights Foundation said in an interview, state legislators believe “it's still their responsibility to protect the residents of their state, and they don't necessarily want to wait for whatever the Ninth Circuit's going to decide… So instead of waiting for something to work its way through the courts, they want to take action now.” According to Jain from the Tech Justice Law Project, these states are watching and learning from the legal fight over the CAADCA. “You start to see a little bit more clarification of some of the things that I think were confusing and particularly at issue in the lawsuit,” she explained.
Vermont and Maryland are cases in point. Both state legislatures have passed their own design code acts. Maryland’s bill became law with the signature of Governor Wes Moore, and Meta says it will not challenge the law in court. In Vermont, however, Governor Phil Scott vetoed H.121 this past June, and the State Senate failed to override the veto. Were it to become law, the current version of the bill would be one of the most comprehensive pieces of state legislation on both data privacy and children’s online safety.
Some of the language in both is similar or even identical to the CAADCA, but there are many key differences. Neither mentions harmful content, for instance; in fact, both explicitly do not require blocking access to third-party content. Both respond to criticisms of the CAADCA’s age assurance regime; the Maryland law prohibits platforms from processing additional data to estimate users’ ages, while the Vermont bill specifically forbids age-gating. The two pieces of legislation are more specific in their descriptions of “dark patterns” and the types of protections platforms must afford to underage users. While Maryland’s legislation includes a requirement that platforms conduct DPIAs, its specificity preempts arguments that its remedies are not tailored to the government’s interest in protecting children. The Vermont bill does not include DPIAs at all, and instead simply prescribes specific protections for underage users.
Vermont’s bill also differs from the California and Maryland laws in at least one other important respect: rather than requiring platforms to act in the “best interests” of children, it places a “minimum duty of care” on service providers. This is more than a semantic difference: duty of care is a recognized legal term requiring the use of reasonable judgment to avoid harming someone through negligence. Rather than holding platforms to some standard of what is “best” for children, Vermont chose to penalize design features that benefit platforms to the detriment of their underage users.
Ultimately, if the CAADCA stands, or is only narrowly curtailed, more states might pursue broad-based legislation protecting a range of children’s rights. If the lower court’s decision stands, states may resort to adopting narrower bills addressing specific harms and design features in a piecemeal fashion.
Design codes are about more than child safety
If these laws do stand, Jain says they might open pathways to other social media regulations capable of withstanding scrutiny under both the First Amendment and Section 230 of the Communications Decency Act. Children’s online safety is a shared bipartisan concern, but in this way, it is also a test case for concerns about other societal consequences of social media design.
Ravi Iyer at the University of Southern California’s Neely Center understands these consequences firsthand. He was a member of Facebook’s civic integrity team during the 2020 election, where he helped oversee the company’s “break glass” measures. As detailed elsewhere, many of those measures relied on changes to how certain features — group invitations, recommendation algorithms, and other mechanisms — worked.
Iyer has become an advocate for design codes, likening them to building codes: specific minimum standards and best practices that help ensure structures don’t collapse or burn down. “‘Code’ implies laws or standards,” said Iyer in an interview. “It ought to be specific.”
Through the Neely Center, Iyer worked with stakeholders to propose his own design code reflecting consensus best practices across the industry. He says the advantage of design codes is that they can address harms without addressing content; in the introduction to his code, he writes that “one of society’s biggest debates is whether platforms should moderate content more, or moderate content less. We have argued for moving beyond content moderation and instead addressing upstream design.”
Iyer gave an example of “upstream” design and how it can respond to challenges like hate speech. “If you attack hate speech by identifying it and demoting it,” he explained, “you will miss out on, for example, ‘fear speech,’ which still creates a lot of hate. It can be as or more harmful than hate speech. You can’t define all the ways people hate or mislead one another… so you want to discourage all the harmful content, not just what you can identify. This affects the whole ecosystem: publishers see what is and isn’t rewarding.”
This approach can be applied to issues beyond children’s online safety, hate speech, and elections. Consider, for example, longstanding debates about news quality online: the Minnesota Prohibiting Social Media Manipulation Act would require platforms to factor users’ assessments of content quality into ranking algorithms and to transparently share indicators of that quality. It would also introduce transparency requirements around user engagement, the rate at which new accounts are created and at which accounts use certain features, and other metrics.
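As a rough illustration of what factoring user quality assessments into ranking could mean in practice, here is a hypothetical sketch that blends a model’s engagement prediction with an average user-survey quality rating. The field names and the 50/50 weighting are assumptions made for illustration, not language from the Minnesota bill.

```typescript
// Hypothetical ranking sketch: combine predicted engagement with a
// user-reported quality score, then sort the feed by the blended value.
interface Candidate {
  id: string;
  predictedEngagement: number;  // 0..1, output of an engagement model
  avgUserQualityRating: number; // 0..1, from "was this worth your time?" surveys
}

function rankFeed(candidates: Candidate[], qualityWeight = 0.5): Candidate[] {
  const score = (c: Candidate) =>
    (1 - qualityWeight) * c.predictedEngagement +
    qualityWeight * c.avgUserQualityRating;
  // Highest blended score first; a transparency requirement could then report
  // the average quality rating of what was actually shown.
  return [...candidates].sort((a, b) => score(b) - score(a));
}
```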
The fate of the CAADCA will determine whether or not approaches like these are possible in the United States. The California law has flaws. It may be overly broad and vague; it could be misused by ideologues to the detriment of vulnerable youth; it raises concerns about the future of online privacy. As a result, it may succumb to legal challenges. But if the judiciary acts too decisively, the CAADCA might, like an ill-placed domino, take down other, better-considered proposals with it. For this reason, it may be the most important piece of tech legislation most people aren’t talking about.