
AI Researchers, Executives Call for Pause on Training New Systems

Justin Hendrix / Mar 29, 2023

Justin Hendrix is CEO and Editor of Tech Policy Press. The views expressed here are his own.

The Future of Life Institute, a Pennsylvania nonprofit funded and advised by Elon Musk that says its "mission is to steer transformative technologies away from extreme, large-scale risks and towards benefiting life," has launched an open petition that calls "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

The more than 1,000 signatories to date include a number of luminaries in the field of artificial intelligence, such as University of Montréal Turing Laureate Yoshua Bengio, NYU researcher and entrepreneur Gary Marcus, and University of California Berkeley researcher Stuart Russell, as well as current and former tech executives such as Musk, Apple cofounder Steve Wozniak, and Stability AI CEO Emad Mostaque.

The letter argues that, given AI's potential impact on civilization, its development and release demand careful planning and management. But that "level of planning and management is not happening," the letter says:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.

The letter argues that experts should develop safety protocols "for advanced AI design and development that are rigorously audited and overseen by independent outside experts." The signatories do not call for a complete pause on AI development, but rather a "stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

"The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications," Gary Marcus told Reuters.

"It would be more credible and effective if its hypotheticals were reasonably grounded in the reality of large machine leaning models, which, spoiler, they are not," said Alex Engler, a fellow at the Brookings Institution who studies the implications of artificial intelligence and emerging data technologies on society and governance. But, he said, "I strongly endorse independent third party access to and auditing of large ML models - that is a key intervention to check corporate claims, enable safe use, and identify the real emerging threats."

Responding on Twitter, University of Washington computational linguist Emily Bender said that while the letter is "dripping with AI hype," there are aspects of the policy proposals she agrees with. "I'm glad that the letter authors & signatories are asking 'Should we let machines flood our information channels with propaganda and untruth?' but the questions after that are just unhinged AI hype, helping those building this stuff sell it."

Princeton computer scientist Arvind Narayanan, co-author of a forthcoming book called AI Snake Oil, tweeted similar concerns. "This open letter — ironically but unsurprisingly — further fuels AI hype and makes it harder to tackle real, already occurring AI harms. I suspect that it will benefit the companies that it is supposed to regulate, and not society."

The Future of Life Institute is backed by Musk's foundation and, on its website, says it maintains a relationship with the Partnership on AI (PAI), a nonprofit coalition of university and industry partners committed to the responsible use of artificial intelligence. OpenAI, the firm that developed GPT-4, is a PAI partner. Generative Pre-trained Transformer 4 (GPT-4) is a multimodal large language model launched by OpenAI on March 14th. It is available to the public in limited form through the ChatGPT Plus service, and there is a waitlist for its commercial API.

While there is no federal AI regulation in the United States, in January, the National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, released an Artificial Intelligence (AI) Risk Management Framework (RMF), citing "risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet."

And, last fall, President Joe Biden’s White House published a 73-page document produced by the Office of Science and Technology Policy (OSTP) titled Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. That document referenced the possibility that some systems should not be deployed if appropriate risk identification and mitigation have not been completed:

Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use.

Suresh Venkatasubramanian, a computer science professor at Brown University and a former advisor to OSTP who coauthored the Blueprint, is supportive of the idea of a pause, but says the letter fails to go far enough to meet the moment.

"I think the idea of a pause, as a collective action by researchers working in this space, would be a good idea. I think that a pause coupled with a plan for what would happen next is a good idea. And I think a pause that includes systems currently deployed is a good idea. And I think a pause that acknowledged all the ways in which existing AI systems are causing harms already and that acknowledges why guardrails already proposed are important and should be adopted would be fantastic," Venkatasubramanian told Tech Policy Press. But, he added, "I don't see this letter doing any of this."

While the U.S. federal government has so far produced only guidelines, frameworks, and the threat of more substantial regulatory oversight, other jurisdictions are moving to pass new laws to govern AI. The National Conference of State Legislatures notes that "general artificial intelligence bills or resolutions were introduced in at least 17 states in 2022, and were enacted in Colorado, Illinois, Vermont and Washington."

But perhaps the most significant efforts are outside the U.S. The European Union has a much more developed regulatory approach to AI. Its horizontal AI Act is expected to be completed this year. The AI Act will "create a process for self-certification and government oversight of many categories of high-risk AI systems, transparency requirements for AI systems that interact with people, and attempt to ban a few 'unacceptable' qualities of AI systems," according to a report by Engler for Brookings.

And today, the United Kingdom's Secretary of State for Science, Innovation and Technology issued a white paper on its approach to AI regulation. It notes that "public trust in AI will be undermined unless these risks, and wider concerns about the potential for bias and discrimination, are addressed."

The Future of Life Institute's letter initially suffered an authenticity problem of its own: the institute had to remove signatures that were falsely submitted, including those of OpenAI CEO Sam Altman, former Microsoft CEO Bill Gates, and others. At the time of this writing, one signatory is listed as "John Wick, The Continental, Massage therapist," a reference to the fictional character played by Keanu Reeves.
