Regulating Artificial Intelligence Must Not Undermine NIST’s Integrity
Keegan McBride / Aug 20, 2024

Dr. Keegan McBride is a Lecturer in AI, Government, and Policy at the Oxford Internet Institute and an Adjunct Senior Fellow in National Security and Technology at the Center for a New American Security.
The United States is the global leader in the development of AI and is well-positioned to influence AI’s future trajectories. Decisions made today in the US will have a long-lasting impact, both domestically and globally, on how we build, use, and experience AI. However, recent legislative proposals and executive actions on AI risk entangling the National Institute of Standards and Technology (NIST) in politically charged decisions, potentially calling the organization’s neutrality into question.
This is an outcome that must be prevented. NIST plays a key role in supporting American scientific and economic leadership in AI, and a strong, respected, and politically neutral NIST is a critical component for supporting America’s leadership in technological development and innovation.
For over a century, NIST has helped advance American commerce, innovation, and global technological leadership. NIST’s experts have developed groundbreaking standards, techniques, tools, and evaluations that have pushed the frontier of measurement science. Today, almost every product or service we interact with has been shaped by the “technology, measurement, and standards” provided by NIST. More recently, in the context of ongoing global AI competition, NIST has also been active in developing important standards for AI-based systems.
Key to this success has always been NIST’s ability to keep politics away from science, remain neutral, and focus on what it does best: measurement science. Now, in the name of AI Safety, many emerging proposals would task NIST with directly conducting evaluations of AI-based systems. These risks are further compounded by the introduction of an increasingly politicized AI Safety Institute (AISI). Though these points might seem trivial, the long-term implications are significant.
NIST could find itself in the middle of a new political battleground as competing interests lobby for results more aligned with their specific beliefs and agendas. For example, if NIST were tasked with directly evaluating specific AI systems, interested parties would have clear incentives to try to influence the outcomes of those evaluations. This would challenge NIST’s objectivity and undermine its standing as a scientific leader, both domestically and internationally. This must not be allowed to happen.
This is not a hypothetical risk, and initial cracks have already begun to show.
In March 2024, staff at NIST reportedly threatened to resign after Commerce Secretary Gina Raimondo appointed former OpenAI researcher Paul Christiano as the new head of AI Safety at the AISI. An article in VentureBeat, citing anonymous sources "with direct knowledge of the situation," said staff believed the appointment could “compromise the institute’s objectivity and integrity” due to Christiano's connection to controversial figures and ideas in the effective altruism movement. University of Washington professor Emily Bender, a critic of effective altruism, told Ars Technica that "NIST has been directed to worry about these fantasy scenarios" because of language that made its way into the Biden administration's executive order on AI. Beyond the emerging risks to the neutrality of the important work done at NIST, the institution also faces clear financial pressures and lacks the resources it needs to fulfill its responsibilities. The controversy around the AISI may exacerbate these pressures.
It does not have to be this way. It is essential to support NIST in its mission instead of distracting and undermining it. A strong NIST will continue to help build standards that are adopted globally and lay the foundation for further American AI innovation and dissemination.
The world needs new methodologies, frameworks, and tools to evaluate and monitor increasingly advanced AI systems. These are crucial for furthering AI deployment throughout the economy. NIST should play a key role in the creation and support of these standards. The AISI does have a role to play in all of this, and NIST is likely a good home for it.
However, an increasingly politicized AISI must not be allowed, in the process, to undermine NIST’s long legacy of neutral, essential work.