Writing this month in the journal Nature Machine Intelligence, a group of researchers from a North Carolina pharmaceutical company, the Departments of War Studies and Global Health & Social Medicine at King's College London, and Switzerland's Spiez Laboratory detail how artificial intelligence (AI) technologies developed for drug discovery could potentially be used to generate biochemical weapons.
The researchers, invited to consider potential security concerns in the field of “chemistry, biology and enabling technologies” at a conference hosted by the Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection at the Spiez Laboratory, conducted a thought experiment that “ultimately evolved into a computational proof of concept for making biochemical weapons.” Since the team is normally focused on developing machine learning models to predict toxicity in therapeutic drugs, they note they “had not considered” how such work could just as easily be “used to design toxic molecules.”
We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.
Using a machine learning “molecule generator” dubbed “MegaSyn,” which makes predictions aimed at helping to find “new therapeutic inhibitors of targets for human diseases,” the researchers simply toggled the system away from its typical purpose “to reward both toxicity and bioactivity instead.” The system was trained on a public database of molecules and built on “open-source software that is readily available.”
Just six hours after the researchers set the system to work, it had generated 40,000 molecules that met the desired threshold for toxicity. Some of the molecules it generated “were predicted to be more toxic,” in fact, “than publicly known chemical warfare agents.” They note that while they did not physically synthesize any of the molecules, a little-regulated “global array of hundreds of commercial companies offering chemical synthesis” exists that could theoretically have done the job.
“By inverting the use of our machine learning models,” the authors write, “we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.”
The researchers conclude with a “wake-up call” to the broader community, warning that public conversation about the potential of AI and machine learning technologies could “flip” from enthusiasm for therapeutic applications to concern about risks. The researchers consider various steps that might be taken to put safety and transparency measures in place, such as using APIs to limit access to models and software that could be misused, creating a “hotline to authorities” to report potential abuses, providing more “ethical training” to science and computing students to make them aware of the dangers of the misuse of AI, and establishing “a code of conduct” for industry to “train employees, secure their technology, and prevent access and potential misuse.”
One of the article’s authors, King's College London’s Filippa Lentzos, has written about these types of threats in the past, including in a Bulletin of the Atomic Scientists post in December 2020 that considered how technological developments may “radically transform the dual-use nature of biological research, medicine, and health care and create the possibility of novel biological weapons that target particular groups of people and even individuals.”
Read the full article in Nature Machine Intelligence.
Justin Hendrix is CEO and Editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was Executive Director of NYC Media Lab. He spent over a decade at The Economist in roles including Vice President, Business Development & Innovation. He is an associate research scientist and adjunct professor at NYU Tandon School of Engineering. Opinions expressed here are his own.