Initial Takeaways from the Canadian AI Safety Institute Launch

Matthew da Mota, Duncan Cass-Beggs / Dec 12, 2024

Canada's Minister of Innovation, Science and Industry François-Philippe Champagne speaks on stage during the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, in central England, on November 1, 2023. (Photo by LEON NEAL/POOL/AFP via Getty Images)

The newly minted Canadian AI Safety Institute (CAISI) is an essential building block in Canada's AI landscape and in its capacity to contribute to AI governance on the global stage. The launch of CAISI reflects many of the features recommended by experts, including by the Centre for International Governance Innovation (CIGI) in April. However, several questions remain: whether the Institute will prioritize the most significant safety risks of next-generation AI systems, whether it will take on the necessary related AI policy and governance work, and what role safety will play in Canada's broader AI strategy and ecosystem.

The Launch

Innovation Minister François-Philippe Champagne officially launched CAISI on November 12, 2024, backed by a $50 million commitment from April's federal budget. The Institute will focus on AI safety research, the development of AI risk assessment tools, and risk mitigation. CAISI joins counterparts in the UK, the US, and Japan in the International Network of AI Safety Institutes (AISI Network), launched in May. The AISI Network's mission statement, released at its first meeting this November, highlights research, testing, guidance, and inclusion as the key focuses of its work, with a mandate to collaborate on and coordinate AI safety research globally.

CAISI’s Structure

Under the oversight of Innovation, Science and Economic Development Canada, CAISI has two research streams to address risks posed by synthetic content and by “the development or deployment of systems that may be dangerous or hinder human oversight.” The Canadian Institute for Advanced Research (CIFAR) Applied Research stream will fund multidisciplinary research on immediate and long-term frontier AI risks at Canada's three AI research hubs (Amii, Mila, and the Vector Institute), as well as at other labs that apply for funding. The National Research Council stream will focus on government-led research in cybersecurity and international AI safety, which will likely include classified projects.

Remaining Questions

While the launch of CAISI is a valuable step, several key questions will determine whether it fully meets its potential.

Prioritizing the most important global safety risks

AI poses many potential safety risks that deserve the attention of policymakers. However, we believe that CAISI should focus primarily on the most severe global-scale risks, which countries can only address through collaboration. Global catastrophic risks of AI, such as loss of control over or misuse of advanced AI systems, are an increasing concern for leading scientists and governance thinkers, some of whom proposed in September the establishment of an international body to govern such risks. It is important that CAISI prioritize these risks, both to ensure the safety and security of Canadians and to secure a place for Canada at the center of international discussions on what are likely to be the most consequential issues of our time.

Including research on AI governance

Many of the most pressing challenges in AI safety are technological in nature, such as evaluating the risks of AI systems and developing safe-by-design AI. However, AI safety also raises challenging international governance questions. Who should decide when, and under what conditions, to permit the development of AI systems with dangerous capabilities? How should societies determine the acceptable balance between the public safety risks and the potential benefits of deploying AI? And how can governments work together to prevent global risks or respond to emergencies caused by AI? We believe that a portion of CAISI's research should be devoted to mobilizing Canada's considerable expertise in the related social science, humanities, and legal disciplines to address these questions through interdisciplinary research.

Canada Could (and Should) Lead on AI Safety

Finally, it is in Canada's strategic interest to adopt a strong commitment to safe, secure, and trustworthy AI. Given that developing next-generation general-purpose AI systems may cost tens of billions of dollars, Canada is unlikely to be home to the next OpenAI or Anthropic. It also remains unclear whether advanced AI systems can be made sufficiently reliable to be developed and deployed at scale without posing severe risks to humanity. In this context, there may be greater benefits in developing smaller, more robust, reliable, and safe-by-design AI systems.

Canada can leverage its strengths to support research into AI safety, verification methods, evaluations, governance, diplomacy, and other key areas that simultaneously advance global AI safety and Canadian innovation. Canada has the talent, world-class institutions, a history of innovation in AI, a strong reputation in governance, and the capacity to build the necessary infrastructure to be a global leader in ensuring that future AI systems are both safe and beneficial for all.

Authors

Matthew da Mota
Matthew da Mota is a senior research associate and program manager for the Global AI Risks Initiative at the Centre for International Governance Innovation.
Duncan Cass-Beggs
Duncan Cass-Beggs is Executive Director of the Global AI Risks Initiative at the Centre for International Governance Innovation (CIGI).
