Early Lessons from Australia's Teen Social Media Ban for the Rest of the World
Ramsha Jahangir, Mark Scott / Apr 1, 2026
A smartphone displays a folder of social media applications, including TikTok, Instagram, Facebook, X, Snapchat, and YouTube. Australia's eSafety Commissioner has warned social media platforms over their failure to enforce the country's ban on under-16 users. (Photo by Samuel Boivin/NurPhoto via AP)
It’s been three months since Australia’s social media ban for teens under the age of 16 took effect on December 10. The country’s eSafety Commissioner just published its first take on how things are going.
The findings suggest the rollout of Australia’s Social Media Minimum Age law (SMMA) has been challenging, with platforms falling short of their obligations while key indicators of harm remain unchanged.
The country’s regulator said it had raised “significant concerns” about five major platforms — Facebook, Instagram, Snapchat, TikTok, and YouTube — and has started formal investigations ahead of potential enforcement actions by mid-2026.
“Many children aged under 16 still have their accounts or can create new accounts,” reads the report. “eSafety has observed poor practices by some platforms in the first three months of the SMMA obligation coming into effect.”
Headline figures indicate activity at scale. More than 4.7 million accounts assessed to belong to under-16s have been removed, deactivated, or restricted as of mid-January. But account removals do not equate to full compliance. Underage use appears to have declined but remains significant. The report’s detailed findings provide a clearer view of the gaps.
One issue is how age assurance systems have been implemented. The report describes cases where users who had already declared themselves under 16 were prompted to complete additional checks to “correct” their age. In practice, these additional checks allowed some users to regain access. Tools such as facial age estimation were used to check users’ ages, despite known limitations in accuracy near the 16-year threshold. System design choices almost certainly contributed to the risk of misclassification.
A second issue is the pace of rollout.
Many under-16 users retained accounts because they had not yet been asked to verify their age. The delayed deployment of age checks to existing users limited the reach of the new social media ban. Where platforms introduced stronger measures to keep minors off their platforms, these steps often followed direct engagement from the regulator, suggesting that compliance efforts were largely reactive.
Early indicators also point to limited impact on user harm. The regulator reported no clear decline in complaints from under-16 users related to cyberbullying or image-based abuse since the law took effect, though three months is a relatively short window from which to judge long-term impact. Account removals have not yet corresponded with measurable reductions in reported harm.
The blanket ban also does not appear to be keeping substantial numbers of kids off social media.
In a survey of parents who reported that their children had some form of social media account before December 10, eSafety found nearly 70 percent said their kids still had a Facebook, Instagram, Snapchat, or TikTok account. Close to half of parents surveyed said their children still had a YouTube account.
Lessons for other regulators
It is still too early to draw firm conclusions about the impact of Australia’s law. But as other countries race toward similar social media bans, the ability to quantify the effects of such rules, including via polling and the kind of measurable research Australia’s regulator is undertaking, will be critical in assessing whether such policymaking actually reduces children’s use of social media.
Australia was the first to implement a nationwide minimum age requirement. The Tech Policy Press global tracker shows that other jurisdictions are moving in parallel, with differing approaches.
The United Kingdom is the most closely watched comparison. A national consultation, open until May 26, seeks views on whether there should be a social media minimum age and, if so, what that age should be. The current government has been a vocal proponent of a potential nationwide ban.
The Children's Wellbeing and Schools Bill has been in parliamentary "ping pong," with the House of Lords twice voting to insert an outright under-16 ban and the House of Commons favoring government amendments that would enable regulatory powers without mandating a statutory ban. The consultation extends the policy menu considerably beyond what Australia debated. It asks specifically about overnight curfews, algorithmic restrictions, addictive design features, and daily screen time limits alongside the ban question itself.
Indonesia has moved in a direction structurally closer to Australia's, and it is already encountering platform resistance. Its regulation was signed on March 28, with liability placed on platforms rather than children or parents.
But compliance gaps emerged almost immediately: within days of the law taking effect, Jakarta summoned officials from Meta and Google, with the Communications Minister describing them as "non-compliant with the law" and warning that sanctions or a platform block could follow. The speed of that enforcement response stands in contrast to Australia's more methodical evidence-gathering process, though whether Indonesia's more assertive posture translates into durable compliance remains to be seen.
In Austria, proposals are being developed for a lower age threshold, focusing on children under 14. This approach aligns more closely with existing US frameworks and may reduce some of the verification challenges associated with older teenagers. The European Commission, for its part, is weighing whether to consolidate the various proposed national bans into a single approach applied uniformly across the 27-country European Union.
Across all these jurisdictions, several structural patterns hold.
Platform liability is nearly universal. In every case, penalties fall on companies, not on children or parents who circumvent the rules — reflecting both a political calculation and a practical one. Age assurance technologies, particularly facial age estimation, are widely deployed despite documented accuracy constraints near age thresholds, as Australia's compliance report made explicit. And the gap between legislative enactment and functional enforcement is a consistent feature, not an exception.
Such efforts to determine a social media user’s age have drawn criticism from privacy campaigners worldwide. So-called age verification and age assurance technologies, campaigners argue, may undermine people’s anonymity online and, if not appropriately overseen, risk exposing people’s sensitive information in data leaks.
In Malaysia, the government has set a June enforcement target for its under-16 rule but is still finalizing how platforms will verify users' ages using government-issued ID documents. Similar gaps between law and implementation are visible in Brazil, where a new child safety law just took effect but enforcement has been delegated to an under-resourced regulator, and in India, where state-level restrictions are advancing and national age-verification proposals are under consideration alongside serious civil society concerns about surveillance creep.
Differences remain in regulatory design. Some countries are introducing access restrictions based on age thresholds. Others are examining whether targeted obligations on platform features and systems can address similar concerns. Australia’s early compliance data highlights the operational challenges associated with enforcing age limits at scale.
The Tech Policy Press Global Social Media Age Restriction Tracker is updated intermittently with new developments. Spot an update? Contact us.