US Senators Should Learn from Australia’s eSafety Commissioner Ahead of Big Tech Hearing
Justin Hendrix, Gabby Miller / Jan 30, 2024

On Wednesday, Jan. 31, the United States Senate Judiciary Committee will host a hearing on online child safety, including platform responses to CSAM and child exploitation, with the CEOs of Discord, Meta, Snap, TikTok, and X (formerly Twitter). Executives from Google, Apple, and Microsoft are notably missing from this list. The Chairman of the Committee, Senate Majority Whip Dick Durbin (D-IL), and its Ranking Member, Senator Lindsey Graham (R-SC), say the leaders of these platforms “are being forced to acknowledge their failures when it comes to protecting kids.”
For skeptics who expect yet another string of false promises and red herrings from executives and lawmakers, it’s hard to be optimistic about this hearing. Since 2017, Congress has hosted nearly forty hearings on children and social media, according to a November 2023 count by Columbia Law School professor and former White House advisor Tim Wu. But while little has changed in Washington, what is new are regulatory frameworks, such as the European Union’s Digital Services Act and the UK’s Online Safety Act, that require tech firms to disclose more granular information than is typically extracted in Congressional hearings.
One regulatory regime of note is Australia’s. Established in 2015 after the passage of legislation called the Enhancing Online Safety Act, and given additional responsibilities with the passage of the Online Safety Act 2021, the eSafety Commissioner is charged with regulating online harms such as cyberbullying, image-based abuse, and illegal online content, including Child Sexual Abuse Material (CSAM). Now years into its mandate, the eSafety Commissioner has produced a substantial amount of information that any legislator planning to interrogate a tech CEO should read closely before entering the hearing chamber.
For instance, an October 2023 report published by the Australian Government’s eSafety Commissioner found that the major social media companies surveyed were failing to take basic steps to address child sexual exploitation and abuse (CSEA). The report, titled “Basic Online Safety Expectations,” summarized mandated responses from Discord, Google, TikTok, Twitch, and Twitter submitted under Australia’s Online Safety Act 2021. The platforms’ shortcomings generally fell into three categories: which technologies they used to detect known material, which features of their services were scanned, and which types of CSEA were proactively detected.
Executives leading three of the five platforms surveyed in the recent eSafety report, Discord, TikTok, and X, will testify before Congress on Wednesday. The responses these companies provided to eSafety should form the basis for more nuanced questions that Senators can ask. Consider some of the findings from the report relevant to the hearing:
- Video scanning: Scanning images for known CSAM is more or less a solved problem, but tech firms are struggling to identify known CSEA and CSAM in videos. The Australian report found that TikTok currently scans videos using hash-matching tools, while Discord and X do not use any tools to detect known CSEA videos on their services (a simplified sketch of how hash-matching works follows this list). What accounts for the difference in approach?
- Proactive detection on X: In the three months following Elon Musk’s purchase of then-Twitter, proactive detection of CSEA fell from 90 percent to 75 percent. Since the ownership change, more than 6,000 employees, or around 80 percent of its workforce, have been laid off. X told eSafety that its proactive detection has since improved, although X is facing an AUD$615,000 fine for its failure to provide complete and accurate responses in compliance with the Act. How is X fixing the problem?
- Detecting grooming: TikTok told eSafety it uses language analysis technology across public parts of its services and in direct messages. In contrast, X reported that it doesn’t use any tool to detect grooming on any part of its platform. Why not?
- Livestream detection: TikTok has measures in place to detect live-streamed CSEA, whereas Discord does not. When asked what it does to detect live-streamed CSEA, X “did not provide the information required.” What accounts for these disparate approaches?
- Moderation across languages: TikTok has human moderators operating in more than 70 languages, Discord in 29, and X in 12. What does that mean for child safety when not all languages in use on these platforms are resourced?
- Community moderation: Discord is partly community moderated. However, when a volunteer moderator identifies CSEA material on Discord, professional safety staff are not automatically notified. Why not?
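To make the video-scanning gap concrete, here is a minimal, purely illustrative sketch of hash-matching in Python. The hash list, function names, and use of a cryptographic hash are assumptions made to keep the example self-contained; production systems rely on industry-supplied hash lists and perceptual hashing tools (such as Microsoft’s PhotoDNA), which tolerate re-encoding and cropping in ways a cryptographic hash cannot.

```python
import hashlib
from pathlib import Path
from typing import Iterable

# Hypothetical hash list standing in for an industry-supplied database of
# known material. Real systems use perceptual hashes, not SHA-256.
KNOWN_HASHES: set[str] = set()


def sha256_of(data: bytes) -> str:
    """Cryptographic digest, used here only to keep the sketch self-contained."""
    return hashlib.sha256(data).hexdigest()


def image_is_known(path: Path) -> bool:
    """Image case: a single lookup per file against the hash list."""
    return sha256_of(path.read_bytes()) in KNOWN_HASHES


def video_has_known_content(frames: Iterable[bytes]) -> bool:
    """Video case: every sampled frame must be hashed and checked, and a real
    match must survive re-compression, cropping, and scaling, which is part of
    why video detection lags behind image detection."""
    return any(sha256_of(frame) in KNOWN_HASHES for frame in frames)
```

Even in this toy form, the asymmetry the report points to is visible: the image check is one lookup per file, while the video check multiplies that work across sampled frames and depends on hashes that remain robust under transformation.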
Australia’s report is particularly useful because the regulator asks specific, targeted questions designed to elicit meaningful information. These include asking providers to list the exact steps they take to prevent and detect live-streamed child abuse and to avoid amplifying harmful content through recommender systems.
eSafety reported that some providers were less than cooperative with its inquiry. Google’s answers were, in certain instances, “not relevant” or “generic,” and the company was issued a formal warning for non-compliance. X’s violations were “more serious,” according to eSafety; Musk’s company left some boxes entirely blank and provided incomplete or inaccurate responses to other questions.
In sum, Australia’s approach may prove instructive as the US considers federal legislation on child online safety. But the skeptics may still have a point. If companies were taking a safety-by-design approach and assessing risks upfront, as many claim they are, they would not need to retrofit so many well-being and parental controls.
It’s likely that many of the initiatives and “solutions” the CEOs will boast about on Wednesday before the Senate Judiciary Committee are, in practice, inadequate fixes that are fundamentally at odds with these systems’ architectures. Expect vague promises in lieu of actionable responses to lawmakers’ questions. If Congress wants more, it will need to pass a law requiring firms to provide specific, granular information and to be honest about their shortcomings and technical challenges. Until then, lawmakers will need to get in the habit of reading the transparency reports generated under other governments’ regimes, including Australia’s, the European Union’s, the United Kingdom’s, and more.