Legalized Helicopter Parenting and the Constricted Identities of Teenagers on Social Media
Dhruv Bhatnagar / Oct 4, 2024

(Disclaimer: The opinions expressed in this piece are personal to the author. Any organization(s) or institution(s) that the author may be affiliated with, or may have been affiliated with in the past, do not subscribe to, or take responsibility for, the views expressed here. All errors are attributable solely to the author.)
Since the inception of the internet, lawmakers, journalists, and civil society in the US have raised concerns over children’s ability to access harmful content. The targeted material has expanded over the years, with pornography being the primary focus in the 1990s and social media content receiving close attention in the 2020s. Of late, two legislative responses aimed at regulating minors’ access to social media have gained popularity in the US: (1) age verification mandates and (2) parental consent and access requirements.
At the federal level, Sen. Brian Schatz (D-HI) introduced the Protecting Kids on Social Media Act (“PKSMA”) in 2023. Broadly, the bill sought to establish 13 as the minimum age for social media use, mandate age verification of all account holders (including adults), and condition minors’ access to these platforms on parental consent. The bill has been before the Senate Committee on Commerce, Science, and Transportation since April 2023 but has not moved forward.
More recently, in July 2024, the polarizing Kids Online Safety Act (“KOSA”) was incorporated into the Kids Online Safety and Privacy Act, a bill sponsored by Sen. Chuck Schumer (D-NY) that passed the US Senate with bipartisan support. An amended version of KOSA is currently being considered by the House of Representatives. This version would obligate online platforms to develop parental tools aimed at protecting children and spotting harmful behavior, allowing parents to control their children’s account settings and limit platform usage, among other things.
PKSMA and KOSA include elements of state laws enacted by the legislatures of Arkansas, Utah, and Texas, among others, seeking to protect young users from the addictive algorithms, harmful content, and resultant mental health challenges that social media has come to be typified by. However, well-intentioned as these laws may be, digital rights advocates have fiercely assailed them for burdening users’ First Amendment rights to access even lawful material online, chilling constitutionally protected anonymous speech, and subjecting minors to unwarranted surveillance.
Indeed, for broadly these reasons, federal courts blocked Arkansas’ Social Media Safety Act and Utah’s Minor Protection in Social Media Act in Griffin and Reyes, respectively. However, Texas’s Securing Children Online Through Parental Empowerment Act (“SCOPE Act”), arguably the most rights-suppressive of these statutes, has remained partially operational since September 1, 2024.
Mere hours before the SCOPE Act was to go into effect, a federal court in CCIA & NetChoice v. Paxton blocked, on First Amendment grounds, certain provisions of the statute that required social media platforms to moderate and filter content accessible to minors. However, other controversial provisions, including an age verification mandate and parental control requirements, were allowed to take effect because the plaintiffs had failed to make a “clear showing” of unconstitutionality.
Shortly after the SCOPE Act became operational, Meta announced that it would commence putting Instagram users under the age of 16 into new ‘teen accounts,’ which default to the strictest privacy settings and offer parents significant control over their children’s on-platform activity, including the ability to monitor whom their child has messaged, view the topics their child has chosen to see content from, and set total daily time limits for Instagram usage. Meta plans to roll out ‘teen accounts’ in the US, UK, Canada, Australia, and the EU this year and in other parts of the world next year. Admittedly, this was probably a prudent move by Meta, considering Texas recently sued TikTok to enforce the parental control provisions and data sharing restrictions under the SCOPE Act.
Using the SCOPE Act as a case study, this essay examines the constitutionality of statutory provisions authorizing parental access and control over minors’ social media activity under US law. It begins by identifying the constitutionally guaranteed rights - emerging from the First Amendment - burdened by such provisions. Next, it assesses whether these provisions would withstand the applicable standard of First Amendment scrutiny. Finally, it suggests an alternative regulatory path to ensuring child safety online.
Impact on minors’ speech
The SCOPE Act obligates ‘digital service providers’ - defined widely enough to encompass most prototypical social media platforms - to create parental tools that allow the verified parent or guardian of a child under 18 to supervise their child’s use of the service. Similar to equivalent provisions in the current iteration of KOSA, these parental tools must, among other things, allow parents or guardians to control their child’s privacy and account settings, restrict the ability to make purchases or engage in financial transactions, and monitor and limit the amount of time their child spends using the service.
I argue that the SCOPE Act’s parental control provisions curtail minors’ speech and associational rights in two significant ways: first, by encumbering their access to the large swathes of information and interactive resources available on social media; and second, by curtailing online anonymity for minors.
Right to access information and mediums of self-expression
A fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more. It is equally well-settled that the First Amendment protects the right to receive and possess information, regardless of where it originates.
The SCOPE Act makes minors’ access to social media conditional upon allowing their parents or guardians to monitor their activity on these platforms. This may be a tough tradeoff for many young people: teens in fundamentalist communities or abusive homes, queer youth whose parents do not support their search for affinity groups, and even adolescents in foster care. Such parental surveillance may impel teenagers to forego social media altogether or, at the very least, cause them to self-censor their opinions, particularly on sensitive or controversial topics like religion, politics, or sexuality - especially if their parents’ views differ.
The US Supreme Court (“SCOTUS”) has previously struck down legislation that seeks to protect children from purportedly dangerous mediums or forms of expression. In Brown, the SCOTUS invalidated a state law prohibiting the sale or rental of violent video games to minors without parental consent for being inadequately tailored. The Court held that where First Amendment rights are at stake, the state must pursue even legitimate aims through “...means that are neither seriously underinclusive nor seriously overinclusive.”
As a method of protecting children from depictions of violence, the Brown Court found the legislation at issue to be seriously underinclusive since it excluded mediums other than video games and because it permitted a parental veto. Relatedly, the legislation was also deemed overinclusive for abridging “...the First Amendment rights of young people whose parents . . . think violent video games are a harmless pastime.” Thus, the Court’s majority rejected Justice Thomas’s dissenting view “that the state has the power to prevent children from hearing or saying anything without their parent’s prior consent” because “[s]uch laws do not enforce parental authority over children’s speech… they impose governmental authority, subject only to a parental veto.”
The Brown decision is broadly consistent with a prior ruling of the Court of Appeals for the Seventh Circuit in American Amusement Machine Ass’n, where the court struck down a similar parental consent requirement on violent video games. There, the Seventh Circuit opined that minors should be allowed to access and consume uncensored speech because they “are unlikely to become well-functioning, independent-minded adults and responsible citizens if they are raised in an intellectual bubble.” I argue that this rationale applies equally to social media.
Following the precedent of Brown, to the extent that the SCOPE Act’s parental control requirements attempt to protect minors from harmful content on social media, those requirements are underinclusive since this legislation, as a whole, selectively targets only certain digital communications platforms to the exclusion of others. Further, to the extent that these provisions attempt to assist concerned parents, they are seriously overinclusive as they burden the First Amendment rights of teenagers whose parents believe social media is merely a ‘harmless pastime.’
Right to speak anonymously
Anonymous communication is an integral part of America’s social and political discourse, and the SCOTUS has safeguarded the right to speak anonymously in a variety of contexts. In McIntyre, the SCOTUS struck down an Ohio election law that prohibited anonymous pamphleteering, holding that the First Amendment protects the freedom to publish anonymously, and this autonomy “extends beyond the literary realm to the advocacy of political causes.” The Court further held that when a law burdens anonymous political speech, the presiding courts must apply “exacting scrutiny,” upholding the restriction only if it is narrowly tailored to serve an overriding state interest.
Subsequently, federal courts in Am. Booksellers Found., Mukasey, and Johnson have invalidated age verification mandates for access to online material for threatening anonymity on the internet, among other reasons.
The SCOPE Act’s parental control provisions restrict online anonymity on social media for young users by subjecting their activity on these platforms to continuous and intrusive monitoring by their parents. Such monitoring could be developmentally harmful to vulnerable youth who may need safe, anonymous spaces to explore their identity. Empirical studies on how LGBTQ adolescents use social media suggest that allowing gender-diverse users to change their nicknames to suit their identity could aid in gradual identity disclosure. Further, having multiple social media accounts permits LGBTQ teens to express and explore identities within specific audiences anonymously.
Additionally, limiting online anonymity may also stifle the speech of whistleblowers, victims of sexual assault, and political dissidents, authors, and artists, who rely on the anonymity afforded by social media to voice unpopular opinions.
First Amendment scrutiny
The preceding analysis establishes that minors’ speech and associational interests are implicated by the SCOPE Act’s parental control provisions. Therefore, to be constitutional, these provisions must withstand First Amendment scrutiny. I submit that despite arguably serving a legitimate governmental aim, these provisions fail to satisfy even intermediate scrutiny, let alone the more demanding strict scrutiny standard, since they are not narrowly tailored.
Standard of review
The applicable standard of First Amendment scrutiny generally turns on whether the at-issue speech regulation is content-based or content-neutral. Courts agree that content-based regulations trigger strict scrutiny, while content-neutral regulations attract the more deferential intermediate scrutiny. Admittedly, the SCOPE Act’s parental control provisions do not impose a textually apparent restriction on any content, viewpoint, or speaker. However, as the District Court correctly noted, the content- and speaker-based discrimination effectuated by the SCOPE Act as a whole is exposed upon examining the statute’s central coverage definition.
To elaborate, the SCOPE Act governs platforms that host or broadcast “social” speech but exempts those that facilitate non-social interactions, like professional interactions, or primarily provide users with access to news, sports, or commerce. Thus, the SCOPE Act regulates platforms based on the content of their speech and the identity of the speaker. It is well settled that laws favoring some speakers over others for their content demand strict scrutiny. As such, the District Court was right to hold that strict scrutiny applies to the SCOPE Act in its entirety.
Even standing alone, the SCOPE Act’s parental control requirements are content-based regulations that must be subjected to strict scrutiny, for the same reason the anti-pamphleteering law in McIntyre was: they restrict anonymous speech.
Tailoring analysis
While the SCOPE Act’s parental control requirements are likely content-based restrictions that should be tested under the strict scrutiny standard, definitively making this assessment is unnecessary since these provisions would fail even intermediate scrutiny. To pass intermediate scrutiny, a law must be narrowly tailored to serve a significant governmental interest. I concede that the governmental interest that the SCOPE Act seeks to further - protecting minors from harmful content and practices on social media - is significant. However, the SCOPE Act’s parental control provisions still fail intermediate scrutiny for being inadequately tailored.
Firstly, the parental control provisions sweep too broadly by encumbering minors’ access to social media platforms altogether and, by extension, to all content circulated on them, including constitutionally protected material that children have a right to access freely. Secondly, the parental access requirement is overbroad for curtailing anonymous speech by minors on social media.
An alternative regulatory approach
Instead of restricting access to online material and interactive resources, enhancing media and information literacy (MIL) is a less paternalistic and more rights-respecting path to attaining the goal of ensuring child safety on social media. MIL focuses on empowerment and capacity building, equipping individuals with the skills to critically assess news, information, and other forms of content and enabling them to participate in the public exchange of ideas responsibly. A recent resolution on ‘freedom of opinion and expression’ adopted by the UN Human Rights Council on July 8, 2022, underscored the importance of MIL in countering disinformation, manipulated media, and hate speech - an opinion shared by educators, human rights activists, and even Big Tech.
A small but growing number of states in the US have enacted legislation incorporating MIL instruction into K-12 education. Recently, California, which has the largest population of K-12 students in the country, passed a media literacy statute (AB-873) integrating MIL instruction into the K-12 curriculum. AB-873’s author, Marc Berman, articulated the statute’s core objective as “...teach[ing] the next generation to be more critical consumers of online content and more guarded against misinformation, propaganda, and conspiracy theories”.
Florida, too, has enacted a similar statute (HB 379), which, among other things, requires the state’s Department of Education to develop a social media safety curriculum for K-12 students, mandatorily covering (1) the risks of social media to mental health and (2) responsible and safe social media use. These statutes can serve as blueprints for other states and even foreign jurisdictions looking to design their own MIL standards.
By no means am I suggesting that MIL alone can solve the complex content moderation challenges presented by social media’s widespread proliferation and increased geo-political significance. However, investing in MIL is an arguably better regulatory strategy compared to heavy-handed parental control provisions since this would help children safely navigate the internet, become responsible digital citizens, and fortify rather than undermine the parent-child relationship.
The SCOPE Act’s parental control provisions encumber minors’ access to lawful material and interactive resources on social media, which they have a right to access freely. Further, these provisions allow parents to continuously monitor their children’s social media activity - a gross invasion of privacy that also prevents children from speaking anonymously, excessively burdening yet another First Amendment-protected interest. Consequently, despite being well-intentioned, these provisions are likely unconstitutional and should have been blocked by the District Court. Governmental resources may be better spent helping children develop the information literacy skills needed to critically engage with online content.