Australia’s Online Safety Act Compels Disclosures From Tech Firms

Julie Inman Grant / Dec 23, 2022

Julie Inman Grant is Australia’s eSafety Commissioner.

It’s often said that sunlight is the best disinfectant. Today, Australia’s eSafety Commissioner has effectively thrown open the curtains on some of the world’s biggest technology companies to shine a light on what they are – and are not – doing to tackle child sexual abuse on their platforms and services.

While I’ve been asking fundamental questions of these same companies about these same issues for years, there has been a real absence of meaningful information about how they are tackling illegal content on their platforms.

But under world-first powers introduced as part of Australia’s Online Safety Act 2021, we now have the legal authority to compel companies to finally give us some straight answers to straight questions.

In August, eSafety sent the first legal notices issued under the new Basic Online Safety Expectations to Apple, Meta (Facebook, Instagram, and WhatsApp), Microsoft, Skype, Omegle and Snap.

We asked each company a series of questions about how they are protecting children from sexual abuse. Their answers, contained in a landmark eSafety report released this month, are nothing short of alarming.

The responses raise multiple concerns, from a clearly inadequate and inconsistent use of widely available technology to detect child abuse material and grooming, to slow response times when sexual abuse material is flagged by users. The report also reveals serious failures to prevent users banned for child abuse offences from creating new accounts to continue to harm children.

The summary of industry responses to the legal notices also confirms that Apple and Microsoft do not even attempt to proactively detect previously confirmed child abuse material stored in their widely used iCloud and OneDrive services.

That’s despite the ready availability of PhotoDNA detection technology – a product Microsoft itself developed back in 2009. It is now used by tech companies and law enforcement agencies around the world to scan for known child sexual abuse images and videos in a privacy-preserving way.

In a setback to global efforts to protect children online, Apple has announced it is dropping its commitment to use technology to detect previously confirmed child sexual abuse material in iCloud. This is a major step backwards from its responsibilities to help keep children safe from online sexual exploitation and abuse, particularly since the client-side scanning tool developed by Apple’s own privacy engineering team was declared dead only last week.

Ironically, Apple says it will instead make it as easy as possible for users to report exploitative content – yet its response to our notice confirmed there was no way for users to report child sexual abuse content from within its services.

Apple and Microsoft also reported they do not use any technology to detect live abuse of children in video chats on Skype, Microsoft Teams or FaceTime, despite the extensive and long-known use of Skype, in particular, for these heinous crimes against children.

This is of significant concern: some of the biggest and richest technology companies in the world turning a blind eye and failing to take appropriate steps to protect the most vulnerable from the most predatory.

Our report also unearths wide disparities in how quickly companies respond to user reports of child sexual exploitation and abuse on their services, ranging from as little as four minutes to as long as 19 days.

Speed isn’t everything, but Microsoft taking an average of two days to act when a child is in danger is simply not good enough. And cases requiring ‘re-review’, which take an average of 19 days to action, are highly problematic.

There were also troubling inconsistencies between platforms owned by the same parent company.

Meta revealed that if an account is banned for child sexual exploitation and abuse material on Facebook, the same user is not always banned on Instagram. Similarly, when a user is banned on WhatsApp, that information is not shared with its Meta stablemates Facebook and Instagram.

This is a significant problem because WhatsApp reports banning a staggering 300,000 accounts for child sexual exploitation and abuse material each month – that’s 3.6 million accounts every year. However, eSafety does recognise that WhatsApp’s high reporting figures may indicate that the service is taking this concern seriously and has processes in place to detect and report such material.

While each provider is different, with different architectures, business models, user bases, and risk profiles, the lack of foundational levels of protection is concerning.

These are some of the wealthiest and most powerful companies in human history and they possess the intellectual capability and access to key technologies to solve these problems.

We know they must balance a range of considerations, including privacy, security and freedom of speech.

But we also know this can be done. It just requires a little extra thought and investment, and we would like to see greater efforts from the industry in this regard. After all, is there really any consideration more important than protecting our children?

It is important to note that eSafety is not assessing these companies’ performance as better or worse than that of their counterparts.

Looking ahead, we will continue to lift the veil of opacity and use our legal powers to require transparency and greater accountability from more companies. We will also broaden our questions to encompass a wider range of online harms.

eSafety also has broader powers that mean compliance with minimum safety standards will no longer be optional. Last month, industry submitted draft mandatory codes covering eight sections of the online industry, and eSafety is now considering whether they meet the statutory tests.

That includes providing appropriate community safeguards in relation to matters of substantial relevance to the community, such as the safety and protection of children. If they do not, eSafety can determine an industry standard.

The era of industry self-regulation is over. The time has come for the global community to stand as one and apply unified pressure to the tech sector to do more, and to do better.
