AI Countergovernance: Lessons Learned from Canada and Paris
Blair Attard-Frost, Ana Brandusescu, David Gray Widder, Christelle Tessono / Feb 20, 2025
This essay is part of a collection of reflections from participants in the Participatory AI Research & Practice Symposium (PAIRS) that preceded the Paris AI Action Summit. Read more from the series here.

France and India hosted the Paris AI Action Summit on February 10-11, 2025.
Amidst cocktails, fireside chats, and endless panels, governments and industry representatives worked hard to shape AI governance at the Paris AI Action Summit. Meanwhile, communities and workers impacted by AI are pushed to the margins of AI governance with little funding and no decision-making power. How can we oppose AI governance when it fails to help those who stand to lose the most from the technology’s deployment?
Our thinking about that question is shaped by “countergovernance,” conceptualized by Rikki John Dean as a process of citizen opposition to state power. Blair Attard-Frost extends this concept to “AI countergovernance,” a process through which communities and workers resist government and industry AI governance initiatives that do not serve their interests.
Attard-Frost examines four cases of community and worker-led resistance against AI governance. One of those cases is the failure of Canada’s Artificial Intelligence and Data Act (AIDA), proposed AI legislation tabled in Parliament in 2022 that received widespread public backlash. Critics voiced many concerns about the AIDA. Notably, they complained that the government held no meaningful public consultation in drafting the legislation, even as public records showed extensive engagement with industry stakeholders. Critics also expressed concern that the AIDA had a limited scope, lacked specificity in defining key terms like “high-impact system,” established insufficient and skewed powers for a new regulatory body, and created barriers to justice for individuals and groups harmed by AI systems.
Advocacy organizations, unions, and policy researchers resisted this legislation in several ways. We organized open letters to Members of Parliament. We submitted briefs to the parliamentary committee studying the legislation, highlighting the lack of proper oversight, accountability mechanisms, and human rights protections; the lack of protection for the creative industries and artists; and a missed opportunity for shared prosperity with respect to workers’ protections, public consultation, and accountability. We also appeared before the committee as witnesses, where Ana Brandusescu and Christelle Tessono offered testimony about the AIDA’s flaws.
We advanced alternative policy proposals for extensive amendments to the AIDA or for entirely different visions for Canadian AI regulation. We engaged in public audits and evaluations of the AIDA’s legislative process to hold the government to account for its failures. We engaged with journalists and communities to raise awareness about the existence of this legislation and the impacts of AI. This resistance helped stall the AIDA in a committee study until the legislation died with the prorogation of Parliament just last month, in January 2025.
We learned three lessons from Canada’s failure to pass national AI legislation. These lessons can be applied to oppose unaccountable state-led AI governance around the world.
Lesson 1: Successful AI countergovernance emerges from strong interpersonal relationships and ad hoc coalitions.
This moment is about learning from spaces and movements outside of tech policy. It’s about getting to know how others have organized and continue to organize. We can look to the Tech Workers Coalition and the #TechWontBuildIt movement. We can learn from the organizing we do in our local communities like the latest Boycott Amazon campaign in Quebec as well as the ongoing student organizing for Palestinian liberation against military and surveillance AI. We can encourage public servants to speak up about AI harms anonymously by creating trust and support networks, and we can include labor and civil society organizations like the Canadian Labour Congress or the Coalition for the Diversity of Cultural Expressions in our efforts.
Lesson 2: Localized governance initiatives can be more effective: They allow workplaces, organizations, and communities to build resources and power to determine if, when, and how AI should be adopted within specific contexts.
We can learn from how the people of Toronto said no to physical instances of Big Tech power by rejecting the Sidewalk Labs/Google/Alphabet takeover of our waterfront for a “smart city” development project. We can learn to say no to AI in our workplaces, campaign against tools like predictive analytics, and reject their use in hiring processes and worker surveillance. We can equally reject using facial recognition for law enforcement, for filing our taxes, or for receiving an unemployment check.
Lesson 3: AI awareness and education should come from the bottom up: communities can build knowledge of how AI impacts them and how it can be regulated by them.
This is our chance to push back against the status quo of AI literacy campaigns. We can focus on critical thinking literacy and question decisions made by those in power. We can encourage and support a discourse of not being afraid to ask why, and of asking why in groups. We can decide what kinds of campaigns are needed to make labor more inclusive, to protect everyone in the workforce, and to actively link this protection to race and gender.
The Paris AI Action Summit showed us that “participatory AI” and “public interest AI” are often co-opted by government and industry to advance their own agendas for geopolitical and economic power, mass surveillance, and national security. To resist AI governance agendas that fail to serve the public interest, the world can look to lessons learned from Canada’s resistance against undemocratic AI legislation.