Tim Bernard is a tech policy analyst and writer, specializing in trust & safety and content moderation.
On September 6, the House of Lords held the third reading of the UK Online Safety Bill, the first draft of which was published in May 2021. It's been a long time coming: the white paper that preceded the bill was published in 2019, building on a strategy paper dating back to 2017.
Over the years, the bill has more than doubled in length to around 300 pages, inspired both vehement criticism and strong support, and been beset by multiple delays during the British government’s leadership turmoil. Now, the bill will return to the House of Commons for a final reading on September 12, when Members of Parliament will consider the amendments supported by the Lords.
As one might expect from its length, the scope of the bill is huge. Many requirements and offenses have been added over the years by different ministers and under pressure from various interests both in and out of government. Not much has changed in recent months, and few backbench amendments have been accepted. The government's overview from this past January of its proposed amendments remains a helpful guide to the most significant changes.
Key Elements of the Bill
One of the long-standing—and largely uncontroversial—elements of the legislation is a series of requirements for internet services to conduct risk assessments for various harms (large platforms have similar duties under the EU’s DSA). This accompanies a duty of care to prevent users being harmed on their platforms.
There are also special strictures regarding protecting children from various harms, including design code-type requirements, the restriction of many kinds of hateful and graphic content, and age verification for pornographic sites.
Again parallel to the DSA, platforms are to promptly remove material that is illegal. Unlike under the EU's regime, however, for certain priority categories they are expected to make the judgment of illegality themselves; for non-priority categories, as in the EU, the obligation applies only after a third party has informed them of a content artifact's illegality.
Defined types of journalistic and other material “of democratic importance” have procedural and substantive protections from content moderation actions.
The bill also establishes several new criminal offenses for users: false communications (knowingly sharing false information with intent to cause harm), threatening communications, sending content to cause seizures, encouraging self-harm, and sharing non-consensual or unsolicited intimate imagery.
The managers of companies that fail to comply with regulator instructions or deliberately mislead regulators are also subject to criminal penalties, including fines and imprisonment.
Selected Changes and Controversies
- The Bill’s most controversial historical section was related to content that is “legal but harmful to adults.” This has been removed.
- Platforms are expected to remove content if they have “reasonable grounds to infer” that the content is illegal (for priority categories of illegality). There is concern that this standard for adjudicating illegality will lead to platforms over-moderating due to fear of legal risk. The current bill text does not change the standard, but adds some more detail and requires the regulator, Ofcom, to release further guidance promptly.
- Priority illegal content is not restricted to consensus matters like child sexual abuse material and terrorism. The list (detailed in Schedule 7) famously includes even offenses relating to illegal immigration and encouraging any of the listed offenses.
- The Secretary of State is empowered to add priority illegal categories, though amendments to the bill have imposed additional criteria and transparency procedures on the use of that power.
- The special measures relating to children will require age assurance on many websites and apps, which has raised privacy concerns. The current text of the bill instructs Ofcom to study the issue and to create a code of practice that lays out what methods of verification or estimation are required in which contexts.
- The most prominent recent controversy concerned threats to end-to-end encryption (E2EE). The Online Safety Bill, similar to an early draft of an EU regulation, allows Ofcom to impose "accredited technology" solutions on platforms to detect child sexual exploitation and abuse content and/or terrorism content. Signal and WhatsApp both threatened to withdraw service from the UK if this passed without a guarantee that they would not have to implement any circumventions of E2EE, such as client-side scanning.
On the eve of the final discussion of the Bill in the Lords, the government minister responsible for the legislation’s passage in the upper house made a statement seeming to acknowledge that no sufficiently accurate and privacy-preserving technology currently exists to scan E2EE communications. While Signal President Meredith Whittaker claimed this as a victory, other reporting emphasizes that the legislation has not been changed and argues that the government’s comments make no significant new concessions. (This thread evaluates the opposing conclusions in some detail.)
The Conservative government maintains a healthy majority in the Commons for now, and the Labour Opposition has generally supported the passage of the Online Safety Bill. Accordingly, there is every reason to expect that next week will see the approval of the bill with no significant further changes. On gaining royal assent (usually a couple of months after the Commons approves the bill), some aspects of the bill will go into effect immediately, and the others as determined by the Secretary of State, presumably in coordination with Ofcom.
Ofcom has been staffing up in recent months in anticipation of the implementation of the new regime and will be required to draft multiple and varied codes of practice and other guidance documents. These will determine how the law is interpreted in practice and, in effect, what it will be like to use the internet in the UK in the coming years.
Tim Bernard completed an MBA at Cornell Tech and previously led the content moderation team at Seeking Alpha, as well as working in various capacities in the education sector. His prior academic work includes an MA in Talmud and a BA in Philosophy.