The UK’s Online Safety Bill: Where is it Now?

Tim Bernard / Nov 15, 2022

Tim Bernard recently completed an MBA at Cornell Tech, focusing on tech policy and trust & safety issues. He previously led the content moderation team at Seeking Alpha, and worked in various capacities in the education sector.

The UK’s sweeping Online Safety Bill to counter digital harms covers many of the (real or perceived) problems of the contemporary internet ecosystem, such as how to regulate consensual and non-consensual pornography, content moderation and appeals, transparency, and the trafficking of goods and humans. The Bill puts emphasis on protecting children, preventing the spread of terrorist propaganda, and reducing the prevalence of both illegal and “legal but harmful” content. It would appoint the communications regulator Ofcom as the enforcement agency for a new “duty of care” placed on internet services in scope of the legislation: a responsibility to prevent harm from coming to their users via their products.

Earlier this year, the Bill was introduced and looked likely to pass into law with cross-party support after going through the standard parliamentary processes. But with the recent turmoil in Westminster, it’s a good time to ask where it stands, as well as to review some key critiques.

Timeline

This legislative process is still some distance from completion, and the Bill continues to inspire both impassioned advocacy and intense critique. Consequently, the future of the Bill is far from certain.

Critiques

Despite widespread support from the public and the opposition—likely rooted in the “do it for the children” motivation and the conventional wisdom that social media does great harm and needs to be regulated (and/or punished)—there has been criticism of the Online Safety Bill from many different quarters. Some of the most pertinent critiques:

  • Perhaps the most commonly critiqued aspect of the Bill in its published versions is its attempt to rein in content that is generally agreed to cause harm, even though it is not against the law. The UK has laws prohibiting hate speech and threats, but other examples of “legal but harmful” content might include self-harm information, graphic imagery, or misinformation. The Bill leaves it to Ofcom to determine whether large tech platforms are suitably fulfilling their duty of care by conducting risk assessments and explaining how they are mitigating risk.

Despite the government’s insistence that platforms will not be required to remove any specific content unless it is illegal, but rather would be required to institute compliant systems and processes, critics including civil liberties organizations, legal experts, and “free-speech” conservatives (most notably the UK Secretary of State for International Trade, Kemi Badenoch) have balked at the prospect of the government, even if indirectly, setting standards for acceptable legal speech. Ultimately, the Bill allows the Secretary of State to direct Ofcom, which is an independent agency, to adjust its standards to follow the government’s policy agenda.

The House of Lords report, “Free for all? Freedom of expression in the digital age,” recommended, inter alia, that this category be removed from the law, and that the most serious harmful offenses be criminalized in separate legislation instead, so that there would be no distinction between what is illegal offline and what is prohibited online.

  • In an attempt to protect democratic discourse, the Bill requires platforms (again, just the largest ones) to protect “journalistic content” and “content of democratic importance.” The definitions of these categories, whether set by the law, Ofcom guidelines, or platform rules, are bound to invite strife whenever there is disagreement as to whether a particular individual, organization, or piece of content is included or excluded.
  • The responsibilities of platforms with regard to children are substantially more expansive, and in order to comply, a provider must make use of some kind of age verification procedure. This has potential negative implications for the privacy of all users, as critics of the California Age Appropriate Design Code have outlined. It has also been suggested that this requirement (and possibly others) is a calculated boon to age verification service providers based in the UK.
  • An implied general monitoring responsibility, without an exemption for private communications (including end-to-end encrypted services), poses serious privacy risks. This aspect of the Bill has prompted a great deal of concern from digital rights groups around the world.
  • The Bill requires the swift removal of illegal content from services. This places a high burden on online services to determine legality, a determination that sometimes hinges on factors like intent and is therefore very difficult to make. In addition, illegality is not consistently defined within the UK, as the constituent nations diverge in some important laws. Recent amendments clarify that providers must remove material when they have “reasonable grounds to infer” that it is illegal. Most concerning, this standard risks causing platforms to over-censor, in many cases via automated processes, and thereby impede the expression of what is, in fact, legal speech.
  • Although only the largest platforms will be required to deal with the most esoteric obligations (such as addressing “legal but harmful” content, protecting democratic discourse, and transparency reporting), all services that host user-generated content (potentially including Mastodon instances) will be in scope of some potentially very burdensome duties of care. These may overwhelm small businesses, while further entrenching deep-pocketed incumbents.

One further element of the legislation that is worthy of mention is its emphasis on Safety by Design. This approach, also championed by the Australian eSafety Commissioner, calls for products and product features to be designed with user safety in mind, so that harmful material is less likely to be posted or viewed in the first place, rather than relying on after-the-fact moderation. Praiseworthy though this approach may be, it is unclear how well compliance with a policy about design principles can be enforced in a regulatory framework, though the general idea has also been implemented in the UK Age Appropriate Design Code, which inspired the similar California law referenced above.

Recent Developments

Though the political process began before her tenure, the Online Safety Bill was presented to Parliament and shepherded along by former Culture Secretary Nadine Dorries. The Bill was dropped from parliamentary business while then Prime Minister Boris Johnson’s government imploded during the summer, and when Johnson was replaced as Prime Minister, Dorries resigned her post.

During the first Conservative leadership contest (which also featured a vehement opponent of the Bill, Kemi Badenoch), both the frontrunners, MPs Liz Truss and Rishi Sunak, expressed general support for the Bill, but also serious reservations about the “legal but harmful” provisions. Truss appointed Michelle Donelan to lead the Department for Digital, Culture, Media and Sport during her brief tenure as PM, and Rishi Sunak kept Donelan on when he replaced Truss as Prime Minister.

Also of note, a coroner’s report on the 2017 death of Molly Russell, a 14-year-old girl from London, was issued at the end of September. The coroner directly blamed Instagram and Pinterest for serving her self-harm material and cited this among the causes of her death. This has reignited enthusiasm for the Bill. On October 20, Donelan called it her “top priority” and said that it would return to the House “imminently.” However, the third reading scheduled for November 1 was removed from Commons business.

Anonymously-sourced reports in recent weeks have claimed that Donelan will indeed bring back the Bill within weeks—amended to remove responsibilities around “legal but harmful” content that is “harmful to adults,” while retaining obligations related to content that is “harmful to children” (whatever that distinction actually means in practice). However, Sunak has publicly declined to commit to a timeframe for bringing back the Bill, and sources within the Department for Digital, Culture, Media and Sport have warned that further delays could imperil its passage. In light of the difficult political and economic climate in the UK, and serious opposition from within the Conservative parliamentary ranks, it remains unclear exactly when the Online Safety Bill will return, and in what form.

- - -

The outcome for the Online Safety Bill in the UK is important both within and beyond its borders. The legislation can be seen as part of a third approach to platform regulation—distinct from the EU’s comprehensive new DSA regime and the US’s laissez-faire status quo—being adopted by other non-authoritarian, mostly English-speaking countries. Ofcom has just joined with the regulators from Australia, Fiji and the Republic of Ireland to form the Global Online Safety Regulators Network, which seeks “to pave the way for a coherent international approach to online safety regulation.” Other countries currently in the process of adopting online safety legislation include Canada, New Zealand, and Singapore.

To follow more in-depth discussion of problems with the Online Safety Bill, I recommend the writings of two commentators, both strong critics of the Bill, but with different emphases:

  • Heather Burns, tech policy and regulation specialist: Twitter | Blog
  • Graham Smith, technology and Internet-focused lawyer: Twitter | Blog

