Julie Inman Grant is Australia’s eSafety Commissioner.
There is nothing new about hate speech or the ignorance and misguided perceptions of superiority that drive it. Hate has been a weapon used for centuries to devastating effect, attacking people on the basis of race, belief or identity.
What is new is the proliferation of online channels by which hate can go viral. Online invective can be launched like a ballistic missile, hitting its designated mark and disseminating its toxin to millions of people in an instant.
That said, one of the greatest gifts of social media is the voice it has given to those who previously had none, serving as a great leveller and allowing anyone to speak truth to power.
Due to the spontaneous, open and viral nature of Twitter, I once believed that no other platform held such promise of delivering true equality of thought and free expression. On the back of the remarkable Arab Spring transformation, sometimes referred to in the Middle East as the “Twitter Revolution,” I was so convinced of the company’s potential for positive social change that I went to work there in 2014.
Fast forward to today, and I oversee the independent statutory agency charged with regulating platforms like Twitter to limit harm to Australians from adult cyber abuse, child cyberbullying, image-based abuse, and illegal content. eSafety serves as a safety net, protecting Australians from a range of online harms when the platforms fail to act.
And Twitter does appear to be failing: failing to confront the dark reality that the platform is increasingly being used as a vehicle for disseminating online hate and abuse.
eSafety received more complaints about online hate on Twitter in the past 12 months than about any other platform. In fact, close to a third of all complaints to eSafety about online hate concerned Twitter.
I am concerned that this may be linked to Twitter’s “general amnesty,” offered last November to around 62,000 permanently banned account holders. To be permanently banned from Twitter means repeated and egregious violations of the Twitter Rules. Seventy-five of these reinstated abusive account holders reportedly have over 1 million followers, meaning a small few may be having an outsized impact on the platform’s toxicity.
If this wasn’t concerning enough, Twitter has drastically reduced its global workforce. This includes deep cuts to its trust and safety personnel (how deeply, we aim to find out) and ceasing all local public policy representation here in Australia.
As someone who started Twitter’s public policy function here, I know how critical the role is in responding to government investigations but also for explaining to Twitter HQ in San Francisco our local context, culture and colloquialisms. Without this real-time knowledge transfer, Australian reports of targeted abuse will increasingly fall through the cracks.
It’s for these reasons that, today, I sent a legal notice to Twitter under Australia’s Online Safety Act, requiring answers about what it is actually doing to prevent online hate from spreading on its service.
I want to know how Twitter is enforcing its hateful conduct policy and just how many of the accounts previously banned for hate have been allowed back onto the platform – and continue to perpetuate abuse at scale, and with relative impunity.
We’re already aware of reports that the reinstatement of some of these previously banned accounts has emboldened extreme polarisers and peddlers of outrage and hate, including Australian neo-Nazis.
eSafety is far from alone in our concern about increasing levels of toxicity and hate on Twitter, particularly targeting marginalised communities.
Last month, U.S. advocacy group GLAAD designated Twitter as the most hateful platform towards the LGBTQ+ community as part of their third annual social media index.
Research by the U.K.-based Center for Countering Digital Hate (CCDH) demonstrated that slurs against Black Americans showed up on Twitter an average of 1,282 times a day before Elon Musk took over the platform. Afterwards, they jumped to an average of 3,876 times a day. The CCDH also found that those paying for a Twitter Blue check mark seemed to enjoy a level of impunity from Twitter’s rules governing online hate, compared to non-paying users, and even had their Tweets boosted by the platform’s algorithm.
eSafety’s own research shows that nearly 1 in 5 Australians have experienced some form of online hate. This level of online abuse is already inexcusably high, but if you’re a First Nations Australian, are disabled, or identify as LGBTIQ+, you experience online hate at double the rate of the rest of the population.
Without transparency about how Twitter’s own rules are set and enforced, or how their algorithms or Twitter Blue subscriptions are further enabling the proliferation of online hate, there is a real risk that bad actors will be allowed to run rampant on the platform.
As the world’s first and longest-standing online harms regulator, eSafety cannot sit idle and leave already marginalised voices abused, sidelined and ultimately silenced.