Five Big Problems with Canada’s Proposed Regulatory Framework for “Harmful Online Content”

Daphne Keller / Aug 31, 2021

Something terrible is happening in Canadian Internet law.

The Department of Canadian Heritage has proposed a new legal framework to deal with “harmful” content. The framework would establish new regulatory entities with broad authority over speech and information shared on platforms like Twitter or Facebook. The rules it creates for platforms sound good on paper, but that’s about it. They disregard international experience with past laws and similar proposals around the world, as well as recommendations from legal and human rights experts inside and outside of Canada.

Canada’s proposal has laudable goals, like preventing online radicalization and protecting vulnerable groups including women, LGBTQ+ communities, people with disabilities, and Indigenous Peoples. But the legal mechanisms proposed for achieving those goals have major problems. Experts in Canada such as Michael Geist, Professor of Law and Canada Research Chair in Internet and E-commerce Law at the University of Ottawa, have pointed out that the proposal barely references free expression or other rights established in the Canadian Charter of Rights and Freedoms—despite establishing sweeping new rules about what Canadians can say and see on the Internet.

Indeed, as Geist notes, the proposal’s requirements for content takedown, filtering, and even website blocking for its five listed categories of online content read like a list of the worst ideas from recent legislative proposals around the world. Human rights groups like Human Rights Watch, Access Now, and Article 19 have been fighting requirements like these one at a time in countries like India, Turkey, and Russia. Canada’s proposal combines them all together in one package.

Here are the five main problems with the proposal:

1. 24-hour takedown provision.

The draft framework says platforms must respond to claims that user content is illegal within 24 hours. That’s much faster than the most notorious existing high-speed takedown mandate, in Germany’s NetzDG law. Germany requires 24-hour takedowns for “manifestly” unlawful content, but gives platforms seven days to more carefully assess speech that isn’t obviously illegal. That rule is controversial, and has been condemned both by German experts and by international observers who note the law’s role as a model for Internet speech crackdowns in countries like Venezuela, Vietnam, and Belarus.

Why do high-speed takedown mandates matter? We know that even under more lenient systems, platforms systematically err on the side of taking down lawful content in order to avoid risk to themselves. Fear of liability also gives them even more reason to give up on honoring the law altogether and simply use private Terms of Service to prohibit broad swaths of legal speech. Canada’s proposed law would impose penalties of up to three percent of global revenue or $10 million. Combine that with a 24-hour takedown requirement, and you have a recipe for massive over-removal of lawful speech and information.

2. Proactive monitoring and filtering.

Canada’s proposed framework requires platforms to “take all reasonable measures, which can include the use of automated systems” to identify and block the five categories of “harmful” content. This is the kind of proactive monitoring – aka filtering – idea that has had civil society and human rights advocates ringing alarm bells in Europe for years. A much narrower, more speech-protective filtering requirement in the EU Copyright Directive is currently being challenged before the EU’s highest court as a human rights violation. Another EU proposal, which would have required filtering for terrorist content—and would also have been much narrower than what Canada is proposing—was scrapped after it drew condemnation from UN human rights officials, human rights prosecutors, and civil society groups.

As many of these organizations pointed out, automated filters can’t tell the difference between truly illegal content (like a terrorist recruitment video) and that same content re-used for news reporting, education, counter-speech, and more. Deploying flawed automated tools to police online information threatens Internet users’ ability to speak about topics of critical public importance, and can undermine their rights to privacy and equality before the law. Looking beyond human rights, requirements like this also have competitive consequences. YouTube may be able to invest US$100 million in filtering technologies, and spend still more on armies of content moderators to correct filters’ mistakes. The smaller competitors who may one day challenge today’s incumbents can’t do that.

3. Platform reporting to law enforcement.

Canada’s proposed framework says platforms must preserve information about people who might have shared illegal content in the law’s five categories (or future categories to be defined by regulators). Then platforms must report those people to law enforcement. Reporting requirements like that exist in some countries when users share extremely dangerous things like child sexual abuse material—content that is both uniquely harmful and has uniquely low risk of “false positives,” in which innocent users are reported to police for engaging in lawful speech. Expansive reporting requirements like Canada’s, sweeping in new categories of speech under vague legal standards, create a much higher risk of such errors. This approach was unknown outside of authoritarian countries until just this year, when Germany enacted a similar reporting rule—one that is being challenged in German courts.

What makes Canada’s plan truly unprecedented, though, is its combination of two novel requirements. Platforms must both use pervasive content filtering to monitor users’ every word and report the results of this privatized dragnet surveillance to the police. That increases both the sheer number of innocent people likely to be swept up, and the consequences for all Internet users. It effectively deputizes platforms to invade users’ privacy and free expression rights in ways that the government, acting alone, cannot.

The human rights consequences of this privatized surveillance are sure to fall disproportionately on less powerful groups in society. We have every reason to expect people of color and other marginalized or vulnerable groups to face more suspicion, be reported to police more often, and be mistreated more after that happens. Disparate impact can start with unfairness baked into AI or other filtering tools. It can be exacerbated by the biases—conscious or not—of platforms’ human content moderators. Unwanted law enforcement attention may be particularly threatening to vulnerable groups like undocumented immigrants, parolees, or sex workers, leading them to self-censor or worse. The US has been coming to terms with this problem in its most recent platform law, SESTA/FOSTA—to the point that Elizabeth Warren and other Members of the U.S. Congress have called for a formal assessment of harms to sex workers.

4. Sweeping regulatory powers.

The proposed framework would create a set of new regulatory bodies, a tribunal, and a Commissioner who can order platforms to “do any act or thing... necessary to ensure compliance with any obligations imposed on [them] by or under the Act[.]” This regulatory carte blanche is backed up by expansive inspection authority to enter platforms’ premises and examine “any data contained in or available to” their global computer systems. As a means to restrict speech without judicial review, this kind of regulatory authority would be a glaring prior restraint problem in many countries. It’s also very hard to reconcile with free expression protections under the American Convention on Human Rights. Canada isn’t a signatory, though, so technically that body of human rights law is not a problem. (The U.S. isn’t, either.)

5. ISP Blocking.

Finally, the proposed framework provides for ISPs to block websites that don’t comply with the law. The last time the US tried something like this, in the proposed SOPA/PIPA legislation, multiple UN and regional human rights officials wrote to object. In some other parts of the world, law has been shifting to tolerate site-blocking in extreme cases, like where an entire site is dedicated to counterfeiting or piracy. But this seems to be about blocking a different kind of site—the kind that hosts a mix of legal and illegal speech posted by users. Site-blocking in this situation is an incredibly blunt tool. It’s the kind of thing that has repeatedly gotten Turkey and Russia into trouble before the European Court of Human Rights.

- - -

Those five things aren’t the only problems with Canada’s proposed law. It also covers a remarkably broad array of technical intermediaries, applying rules designed for Facebook or YouTube to much smaller entities. It seems to cover infrastructure providers, too, despite their limited capacity to engage in anything but the bluntest forms of content suppression. It invites lawmakers to adopt new legal definitions of “harmful” speech, beyond those already established in Canadian law. The operational rules it establishes for content moderation teams at platforms are not what anyone with experience in that field would be likely to recommend. The list goes on. But really, the first five problems should be enough to alarm anyone paying attention.

Comments on the Canadian proposal are due September 25th. Experts like Professor Geist are pretty discouraged about whether the comments will make any difference. But lawmakers would be wise to listen to him; to other Canadian experts like Emily Laidlaw, Vivek Krishnamurthy, and Tamir Israel; and to international civil society groups that have worked on these issues for many years. The harms the proposal identifies are real and deserve serious consideration. But this draft is the wrong way to address them. Canadians, and the international community concerned with these issues, should pay attention before it’s too late.

Authors

Daphne Keller
Daphne Keller directs the Program on Platform Regulation at Stanford’s Cyber Policy Center, and was formerly the Director of Intermediary Liability at CIS. Her work focuses on platform regulation and Internet users’ rights. She has published both academically and in popular press; testified and part...