Australian Regulator Girds for Fight Over Social Media Ban for Kids

Rebecca Kern / Oct 27, 2025

Julie Inman Grant, eSafety commissioner, September 25, 2023. eSafety, established in 2015, is Australia's online safety regulator. (Singapore Press via AP Images)

Come December, Australia is expected to ban Facebook, Instagram, TikTok, Snapchat, X and YouTube from allowing kids under 16 on their platforms under a first-of-its-kind law meant to force Silicon Valley giants to better protect children’s mental health.

It will be enforced by eSafety Commissioner Julie Inman Grant, an American who spent nearly two decades working for major tech companies in both countries before taking over the upstart Australian agency in 2017. From that perch, she is playing an outsize role in shaping global debates over the future of online expression and the rise of artificial intelligence.

It’s a full-circle moment for Inman Grant. As Microsoft’s second lobbyist in Washington, DC, in the 1990s, she helped push for what is arguably the United States’ most consequential tech law, known as Section 230, which shields platforms from liability over user-generated material.

Now, nearly 30 years later, she will be in charge of enforcing one of the world’s toughest laws seeking to rein in US tech giants that grew in power partly thanks to Section 230.

“I think most of them will be on board,” Inman Grant told Tech Policy Press in a wide-ranging interview earlier this month. “If they want to continue operating in Australia, this is the law.”

A preview of battles to come

Under Australia’s law — enacted last December — and regulatory guidance released in September, platforms are required to take “reasonable steps” to prevent kids under 16 from having an account and must use age assurance techniques to identify and remove underage users from their sites. Failure to comply can lead to fines of up to $30 million.

Ahead of the law taking effect on Dec. 10, Inman Grant visited Silicon Valley and Los Angeles late last month, where she had largely “open and collaborative” meetings with Big Tech platforms and AI companies to gauge how they will follow restrictions that are poised to disproportionately impact US-based tech companies.

During the tour, Inman Grant met with Meta, Google, Snap and Discord, as well as AI companies OpenAI, Anthropic and Character.AI. She said the social media companies demonstrated varied levels of preparedness, with Meta the furthest along in compliance. The company later announced in mid-October that it will restrict certain content for teens up to 18, calling it its “most significant update” to teen accounts to date. (A Meta spokesperson said the company is working with the eSafety Commission ahead of the law’s implementation.)

Some tech companies have balked at the prospect, with Google threatening to sue over the inclusion of its subsidiary YouTube as a covered entity earlier this year, and X owner Elon Musk decrying the law as a “backdoor way to control access to the Internet by all Australians.” Inman Grant said she was giving Google “procedural fairness.” (Google declined to comment on its meeting with Inman Grant.)

The commission is doing a final assessment of covered platforms, and will issue a list closer to the effective deadline. “It will be a dynamic list because if a company further rolls back its trust and safety, it could be less safe,” she said.

It’s not just the Silicon Valley tech giants pushing back, though. Civil rights groups and some children’s safety advocates have raised alarms about the law infringing on users’ speech rights.

David Mejia-Canales, a senior lawyer at the nonprofit Human Rights Law Centre in Australia, called the law “a poor solution to the growing problem of misinformation, hate speech, and other harmful content that big tech platforms profit off.”

Stephen Balkam, founder of the Family Online Safety Institute (FOSI), an international nonprofit that receives funding from tech and telecom companies like Google and Comcast, said he fears the law will limit kids’ access to critical communities online.

Still, while Balkam is not supportive of the law, he’s worked with Inman Grant for many years and anticipates she’ll be able to thread the needle in implementation. “If there’s anyone in the world I would put in charge of a controversial rule, it would be Julie,” he said.

AI chatbots: ‘serving up harm on steroids’

While the law Inman Grant is tasked with overseeing was intended to focus on traditional social media platforms, she is quickly having to contend with AI tools that are becoming more popular with kids.

AI companies like OpenAI aren’t currently covered, but the company’s newly launched Sora video generator — its first social offering — could fall under the law’s scope, she said.

Inman Grant’s meetings with OpenAI, Anthropic and Character.AI come as AI companies are starting to be confronted with the kids’ safety concerns that social media platforms have faced for years. Of the group, she said Character.AI was “very much in start-up mode” and was in the “early stages of considering age assurance.”

A Character.AI spokesperson said the company “welcomes working with regulators and lawmakers around the world as they develop regulations and legislation for this emerging space,” and it will “comply with applicable laws.”

Inman Grant has been critical of OpenAI’s track record on safety, particularly after the high-profile death of a teenager, Adam Raine, who was using ChatGPT as a “suicide coach.” During their meeting, she told the company it needs to build safety by design up front.

But in the weeks since, OpenAI’s CEO Sam Altman announced ChatGPT will allow erotica for adult users as part of its new age-gating policies in December. The company also announced the launch of the latest generation of its video generator and social media app, Sora 2.

Inman Grant said she was surprised by the developments, which weren’t mentioned in their meeting. She wrote to OpenAI asking when Sora 2 will be available in Australia, and said the commission will assess whether the product should be designated an age-restricted social media service. (OpenAI did not respond to a request for comment.)

“I’m very concerned that the AI industry does not appear to be learning the lessons of the social media era of moving fast and breaking things,” she said. “Add erotica to the offering, and you’re potentially serving up harm on steroids.”

Additionally, the eSafety Commission sent letters to four AI chatbot companies, including Character.AI, on Oct. 23 asking how they’re protecting kids from harmful content.

Facing geopolitical crosswinds

The law arrives as major platforms in recent years have dramatically cut their trust and safety teams and scaled back their content moderation policies, and as tech CEOs like Elon Musk and Mark Zuckerberg have closely aligned with US President Donald Trump’s campaign to thwart the “censorship” of conservatives online.

“Elon Musk really helped rip the band-aid off in terms of eviscerating 80 percent of [X’s] trust and safety staff. A lot of the companies followed suit,” Inman Grant said.

It also comes as the Trump administration increasingly pressures foreign governments to scale back what they see as punitive digital rules against US companies, including by threatening to pull visas from foreign officials who encourage platforms to “censor” Americans.

Yet so far, Australia’s social media law may be dodging the brunt of the offensive. The law was spared from a Trump administration report earlier this year calling out “digital trade barriers.”

Inman Grant said Australia’s Department of Foreign Affairs and Trade has been engaging with the US government on the law. But she felt the Australian government’s efforts to regulate tech had “a lot more synergies and similarities than people would think” with US legislative efforts, including the recently signed Take it Down Act, aimed at removing AI-generated intimate imagery, and efforts to pass the Kids Online Safety Act in Congress.

‘Poacher turned gamekeeper’

Inman Grant said she never envisioned becoming a global kids’ safety regulator, but after 17 years at Microsoft and shorter stints at Twitter and Adobe, she was done being an “internal safety antagonist.”

Before leaving Twitter, she said she told former CEO Jack Dorsey he couldn’t keep shifting the burden of safety onto users. “I just didn’t feel like I could defend them anymore. I felt I’d very much built my brand on corporate social responsibility and safety,” she said in an interview last year. She called it “serendipity” that Australia was the only country at the time with an online harms regulator, created by the Australian parliament in 2015. The government was looking for someone with tech company expertise — and so the stars aligned.

“I was drawn to following my passion around promoting online safety from a position of strength and authority, backed by a national government,” she said. “And so that's when I became the poacher turned gamekeeper.”

Inman Grant is well aware that Australia’s law is having a ripple effect across the globe, as countries like the United Kingdom, New Zealand and Ireland explore similar rules. And she knows political leaders globally are closely tracking her steps as she sets out to impose the law.

“What we’re doing here represents the first domino,” she told Tech Policy Press. “The regulatory tide has turned, and we’re not the only one anymore. We were the only one for seven years, and now there’s a proliferation of online safety regulators, and we’re all working together.”

Authors

Rebecca Kern
Rebecca Kern is a freelance writer in Washington, DC, covering tech policy issues. She previously worked at POLITICO and Bloomberg Government, and served a short stint in public affairs at the Federal Trade Commission.
