
Free Speech Standards and Social Media Age Restrictions in Australia and the US

Kathleen Beirne, Stephen Robert Watson / Dec 10, 2025

Michelle Rowland, then Minister for Communications, and Prime Minister Anthony Albanese discuss a bill to establish a minimum age for social media in Australia on Nov. 8, 2024. (Instagram)

On Wednesday, Australia’s “world-first” social media minimum age provisions took effect, requiring designated platforms to take “reasonable steps” to prevent children under 16 from holding accounts. According to the Australian eSafety Commissioner, the platforms currently affected include Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube. In parallel, the Commissioner has registered industry codes requiring a broader range of online service providers, including search engines, to implement protective measures, in some cases including appropriate “age assurance,” to prevent children’s accounts from accessing certain adult and harmful content.

Several US states have passed child e-safety laws. Although generally narrower in scope than Australia’s national scheme, many of those laws have failed constitutional scrutiny because they restricted adults’ speech. However, in June 2025, the US Supreme Court upheld a Texas law requiring adult content providers to verify users’ ages, finding that it only incidentally burdened the speech of adults.

Australia, by contrast, does not have a constitutional “right” to free speech. (It is the only Western democracy without a federal bill of rights, although some states, such as Victoria, have human rights laws.) Instead, Australian courts recognize an implied constitutional “freedom of political communication,” the scope of which is narrower than that of US free speech protections. The implied freedom operates as a limit on parliamentary power rather than as an individual “right.” Nevertheless, in late November 2025, the Digital Freedom Project reportedly filed a legal challenge to the minimum age provisions in the Australian High Court on implied-freedom grounds.

This article examines Australia’s new measures alongside the implied constitutional freedom and compares them to free speech challenges to similar laws in US states. Protecting children online is a serious and important issue, but the underlying merits of the policies themselves are beyond this article’s scope. Our aim is to situate Australia’s approach within a broader legal and comparative context, particularly as more jurisdictions consider age restrictions for social media.

A national push toward age limits on social media

In May 2024, the South Australian Premier proposed exploring a “ban” on children under 14 holding social media accounts, and appointed a former High Court judge, Robert French, to examine how such a law could be enacted in the state. In September 2024, former Chief Justice French’s report for South Australia was released. Subsequently, the Australian Prime Minister announced that the federal government would implement a national minimum age for social media, to be legislated before the end of 2024. A federal bill was introduced on November 21 and passed the following week.

The minimum age provisions are often described as a “ban” on children accessing social media. In fact, they impose duties on “age-restricted” social media platforms to take “reasonable steps” to prevent people under 16 in Australia from holding accounts; the legislation does not define “reasonable steps.”

The Explanatory Memorandum to the Bill (EM), a parliamentary document that explains the legislative purpose, states that “it is expected that at a minimum, the [reasonable steps] obligation will require platforms to implement some form of age assurance, as a means of identifying whether a prospective or existing account holder is an Australian child under the age of 16 years.” A second, supplementary memorandum explains that alternative possible reasonable methods “…may include user interaction or facial age estimations.” Platforms may collect government ID for age assurance only if reasonable alternatives are offered, and they face penalties of up to AU$49.5 million for breaching the substantive provisions. The first EM further clarifies that “there are no penalties for age-restricted users who … gain access to [a platform], or for their parents or carers.”

The EM also foreshadowed an independent “age assurance technology trial” to be funded by the Australian government. The final trial report, published in August 2025, found that age assurance is technically feasible, but not without problems. Some age estimation tools variously produced higher error rates or showed “variability in … output” for people of color, certain age groups, and gender presentations. The report also noted “[s]ome providers were found to be building tools to enable regulators [and] law enforcement … to retrace the actions taken by individuals to verify their age, which could lead to increased risk of privacy breaches.” Survey data from the trial also included comments raising privacy concerns, such as “the child did not want to have the photo taken and worried about the photo usage and might be viewed by the child's friends.”

Separately, the eSafety Commissioner has registered nine industry codes, which will become binding on various online service providers. These are the latest in a series of codes registered under the Online Safety Act. One of the new codes requires search engines to implement “appropriate age assurance measures,” where technically feasible and reasonably practicable, to prevent children’s accounts in Australia from accessing certain adult and harmful content. The definition of “appropriate … measures” includes examples such as matching photo ID, using facial age estimation, and deploying AI-based age estimation. These measures, which have received less attention than the minimum age requirement for social media, will come into effect in the coming months.

The First Amendment vs. implied freedom of political communication

The First Amendment in the US provides that “Congress shall make no law … abridging the freedom of speech.” Courts have interpreted this to mean that any law that restricts speech based on its content or viewpoint must satisfy “strict scrutiny.” To satisfy that standard, the government must prove (a) that the restriction serves a “compelling” state interest, and (b) that it is narrowly tailored and the least restrictive means available.

Several US states have enacted child e-safety laws, but many have struggled to survive constitutional review because their restrictions on minors’ access to content also burden adults’ speech. California’s Age-Appropriate Design Code, for example, required providers to consider children’s privacy and protection when a child might access their services. A federal district court has twice blocked enforcement of the Code on grounds that the regulation is “content-based” and thus subject to, and likely to fail, “strict scrutiny.” An appeal of the second preliminary injunction is pending before the Ninth Circuit.

Other state laws have mirrored, in some respects, the new Australian measures. Arkansas’s Social Media Safety Act (Act 689) required all users to verify their age and imposed a parental-consent requirement for minors creating accounts. A federal district court blocked enforcement of the Act, and subsequently declared the law unconstitutional. After that decision, the Arkansas legislature amended the law, though it retained age-verification provisions. Since then, another challenge has reportedly been filed.

One law that has survived is Texas’s HB 1181, which requires websites containing more than one-third “harmful to minors” content to verify the age of all users. Although a district court issued a preliminary injunction, a divided Fifth Circuit largely vacated the injunction as it applied to age verification. On appeal, the Supreme Court, by a 6-3 majority, upheld the age verification requirement, finding that the appropriate standard of review was “intermediate scrutiny,” which requires only that the government show that the restriction furthers an important government interest and that the means adopted are substantially related to that interest. The fate of many state age verification laws may therefore hinge on which standard of review a court applies.

In comparison, Australia has no express free speech right, but the High Court has recognized an implied constitutional freedom of “political communication” (i.e., communication relating to government and political matters). Like the US First Amendment, this freedom operates as a limit on parliamentary power: laws that “impermissibly burden” the freedom will be invalid. As set out by the High Court in Lange and clarified in McCloy, the test asks whether a law (1) effectively burdens the implied freedom, (2) has a legitimate purpose compatible with Australia’s system of government, and if so, (3) is “reasonably appropriate and adapted” to achieve that purpose.

Could Australia’s new social media law burden the implied freedom? In 2025, more Australians get their news from social media than from traditional media, and young Australians report that it is the primary source through which they develop political views. Before Parliament passed the minimum age bill, Australian constitutional law Professor Anne Twomey and human rights law Professor Sarah Joseph each opined that the provisions could effectively burden the implied freedom by excluding children under 16 from participating in political communication on social media.

Although young people under 16 cannot vote, the freedom extends to non-voters. Professor Twomey later noted that although under-16 users could still view content on platforms available without accounts, they would be precluded from creating, commenting on, or uploading content. The Digital Freedom Project challenge reportedly makes a similar claim, contending that the law will restrict essential avenues of political participation for 13- to 15-year-olds.

As Twomey and others have observed, the protection of children from “harm” on social media is likely a legitimate purpose. The EM, however, frames the purpose more specifically, as mitigating harms “that arise from addictive features … associated with the ‘logged in’ state of social media platforms, such as algorithms tailoring content, infinite scroll, persistent notifications and alerts, and ‘likes’ to activate positive feedback neural activity.”

The key question is whether requiring platforms to take “reasonable steps” to prevent under-16s from holding accounts is reasonably appropriate and adapted to the asserted purpose. How the Australian government characterizes that purpose may matter. In its legal challenge, the Digital Freedom Project reportedly answers this question in the negative, arguing that less restrictive options are available.

Much will also depend on how the law is implemented and how “reasonable steps” is interpreted. This will determine the extent of the “burden,” and thus how difficult it is to justify. For instance, a measure that required all Australians to verify their age would likely have a larger impact on the implied freedom. The effectiveness of accepted “reasonable steps” may also be relevant to whether the law achieves its stated purpose.

Indeed, research commissioned by the Australian government reportedly indicated that one in three parents would help their children get around the measures. Relevantly, the eSafety Commissioner has issued guidelines stating that the reasonable steps taken must be proportionate to a platform’s “risk profile” and to the impact on end-users. For instance, “requiring all existing Australian account holders to prove their age may be unreasonable.” While certain platforms have shared more information on the steps they will take to comply with the provisions (and some reportedly began removing accounts before December 10), what constitutes “reasonable steps” in practice will become clearer now that the law has gone into effect.

Broader implications for online speech and governance

Despite the clear differences between the First Amendment in the US and Australia’s implied freedom of political communication, the two jurisdictions have much in common. Multiple High Court decisions have cited First Amendment jurisprudence approvingly (see, for example, Australian Capital Television, Nationwide News, and by distinction, Monis).

Whatever approach the Australian courts ultimately take, the new measures will affect how young people receive and engage with online political content. Adults, too, may face increased requirements to prove their age before accessing certain platforms. As such, developments in the Australian e-safety landscape will be keenly observed by many around the world in the coming months.

The authors wish to thank Sasha Dawson, Jack McLean and Ingrid Weinberg for their review of previous versions of this article.

Disclosure: Kathleen Beirne has acted for consumer and developer class applicants in Australian competition claims against large digital platforms. She is currently on a leave of absence while studying; her contributions to this article are made in her personal capacity.

Authors

Kathleen Beirne
Kathleen is an ICT law LLM candidate at the University of Oslo, studying data protection, European competition law, and internet governance. She is admitted as a lawyer in Australia (Victoria), and has a background in Australian competition, privacy and public law. Kathleen has worked as a litigator...
Stephen Robert Watson
Stephen is a PhD candidate researching the intersection of legal and political philosophy, religious freedoms and discrimination law as a Harding Distinguished Postgraduate Scholar at the University of Cambridge. He is admitted as a lawyer in Australia (New South Wales) and New Zealand and formerly ...
