NIST Unveils Draft Guidance Reports Following Biden's AI Executive Order
Gabby Miller / May 3, 2024

Monday, April 29, 2024 marked 180 days since US President Joe Biden issued his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and with it has come a flurry of activity at the Commerce Department. Of significance are the four draft guidance reports that the National Institute of Standards and Technology (NIST) published, which are “intended to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems.”
These reports, which range from updating NIST’s existing AI Risk Management Framework (RMF) to developing a plan for global engagement on AI standards, are substantial but incremental. Together, they comprise more than 200 pages of guidelines and recommendations that begin to sort through the landscape of AI technologies and their development and uses.
“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio in the announcement’s press release. “These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation.”
Although there are limits to the series of draft guidelines, Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology (CSET), finds this work foundational – especially given the context. “NIST is notoriously underfunded, understaffed. There's been reporting about how at their main headquarters, their buildings are literally crumbling because of a lack of resources,” Toner said. That NIST delivered under these conditions, and on a six-month deadline from the White House, is “really quite heroic.”
Toner believes that the four draft publications are a step in the right direction. “They're not intended to be – and are not trying to be – sort of comprehensive or really solving problems. Instead, they're kind of laying more bricks in the wall that hopefully will help us both understand the technology better, and also have sort of a shared vocabulary, shared set of concepts to work with,” she said.
This week also represents an opportunity for public engagement in the process of AI governance in the US. At the top of each draft publication, NIST advertises its public comment period, welcoming feedback specific to each of the reports’ topics. At the time of publication, the public comment period is active; all comments must be received on or before June 2, 2024. Below are summaries for each of the four draft guidance documents.
Reducing Risks Posed by Synthetic Content
The NIST report on “Reducing Risks Posed by Synthetic Content” provides an overview of the technical approaches to “digital content transparency,” or the process of documenting and accessing information about the origin and history of digital content.
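To give a rough sense of what that documentation can look like, here is a minimal sketch of a provenance record for a generated file. The field names and file paths are hypothetical placeholders invented for illustration; real-world provenance metadata follows industry specifications such as C2PA rather than this ad hoc format.

```python
# Minimal sketch of "metadata recording" for digital content transparency:
# a machine-readable record of a file's origin and history. All field names
# and paths are illustrative placeholders, not drawn from any standard.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def provenance_record(path: Path, generator: str) -> dict:
    """Build a simple provenance record bound to the file's exact bytes."""
    content = path.read_bytes()
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties record to content
        "generator": generator,   # the system claimed to have produced the file
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "edit_history": [],       # subsequent modifications would be appended here
    }

if __name__ == "__main__":
    record = provenance_record(Path("generated_image.png"), "hypothetical-model-v1")
    Path("generated_image.png.provenance.json").write_text(json.dumps(record, indent=2))
```

A verifier can recompute the hash to confirm the record still matches the content; in deployed systems such records are also cryptographically signed so they cannot simply be rewritten. The report surveys approaches of this kind at length.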
By far the longest report, it takes a four-pronged approach to managing and reducing the risks of synthetic content, including:
- Attesting that a particular system produced a piece of content
- Asserting ownership of content
- Providing tools to label and identify AI-generated content
- Mitigating the production and dissemination of AI-generated child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) of real individuals
Some of the standards, tools, methods, and practices it examines include content authentication and provenance tracking, like digital watermarking and metadata recording, as well as technical mitigation measures for synthetic CSAM and NCII, like training data filtering and cryptographic hashing.
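As a concrete illustration of the hashing-based mitigations, below is a minimal sketch of how a training pipeline might screen files against a list of cryptographic hashes of known abusive material. The file names are hypothetical, and production systems generally rely on perceptual hashes supplied by vetted clearinghouses, since an exact cryptographic hash changes under any re-encoding of the content.

```python
# Minimal sketch of hash-based training data filtering: drop any file whose
# cryptographic hash appears on a block list of known abusive content.
# "raw_images/" and "known_hashes.txt" are hypothetical placeholders.
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Compute a file's SHA-256 digest, reading in chunks to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def filter_training_set(image_dir: Path, hash_list: Path) -> list[Path]:
    """Return only the files whose hashes do not appear on the block list."""
    blocked = set(hash_list.read_text().split())
    return [
        f for f in sorted(image_dir.iterdir())
        if f.is_file() and sha256_of_file(f) not in blocked
    ]

if __name__ == "__main__":
    kept = filter_training_set(Path("raw_images"), Path("known_hashes.txt"))
    print(f"{len(kept)} files passed the hash filter")
```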
Toner believes the technical specificity of this report is useful for clarifying and pulling together a chaotic and scattered series of discussions around synthetic content. “This is a really nice compendium of like, here are a bunch of different things that people are trying and here are the strengths and weaknesses of these different approaches,” she said. NIST also doesn’t put forth a set of solutions. Instead, it focuses on outlining opportunities for further research and development – a strength of the report given the uncertainties around these AI technologies, according to Toner.
NIST is seeking feedback on its report on Reducing Risks Posed by Synthetic Content regarding the current state of the art in data tracking techniques, as well as technical mitigations for preventing CSAM and NCII that were not included in the report, among other requests.
A Plan for Global Engagement on AI Standards
The “Plan for Global Engagement on AI Standards” calls for a coordinated effort to work with key international allies and partners and standards developing organizations to create and implement AI-related consensus standards. The Plan explains that standards are crucial in both the development and adoption of new and emerging technologies, especially in AI, where international actors are looking to the standards ecosystem for guidance. NIST sees its role, at least at this stage, as addressing activities before, during, and after the creation of a formal standard.
NIST’s report emphasizes that safety, interoperability, and competition regarding technology can only be achieved if standards are widely accepted and implemented. “While some AI standards will be required by government regulations, their effectiveness generally will depend on organizations to voluntarily adopt those standards – which they will do only if they find the relevant standards implementable and useful,” the report says.
Because both AI standards and the underlying technology are at such an immature stage of development, the report underscores the importance of grounding any standard in a rigorous technical foundation. Were a standard to “get ahead of the underpinning science,” it may prove unhelpful, counterproductive, or incoherent, the report says. Thus, NIST advises that AI standards be developed only where a “science-backed body of work exists.”
This Plan’s objectives for engagement include AI standards that:
- Are accessible and amenable to adoption
- Reflect the needs and inputs of diverse global stakeholders
- Are developed in an open, transparent, and consensus-driven process
- Strengthen international relationships
NIST also identifies three categories for prioritizing standardization and accelerated study:
- Urgently needed and ready for standardization
- Needed, but requiring more scientific work before standardization
- Needed, but requiring significant foundational work
Toner finds these categories of prioritization particularly useful. “People have been calling for AI standards, and for the US to lead on AI standards, for years and there's been a lot of agitation of always trying to get to set the standards,” she said. “But it was always very unclear in this conversation, like what are we actually standardizing? When you say a standard, you have to be standardizing something.” This document is helpful in that it lists out specific areas and “compresses them into a sort of almost readiness level,” Toner added.
The Executive Order further calls for the Department of Commerce to establish a plan for global engagement outlining how US standards stakeholders can interact with international partners. Some of its recommendations include developing and widely sharing tools to assist with implementing standards and guidelines, as well as encouraging horizontal standards applicable across sectors.
NIST is seeking feedback on its Plan for Global Engagement on AI Standards regarding the prioritization of topics for standardization work as well as activities and actions, among other requests.
AI Risk Management Framework: Generative Artificial Intelligence Profile
The “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” draft report is intended to help manage the risks of generative AI (GAI) and serves as the “companion resource” to NIST’s AI Risk Management Framework (AI RMF), as compelled by the Biden Administration’s Executive Order. The AI RMF, originally released in January 2023, is for voluntary use by companies and organizations and strives to better incorporate trustworthiness considerations into AI products, systems, and services.
The GAI Profile is meant to serve as a “use-case” and “cross-sectoral profile” of the AI RMF that helps organizations decide how best to manage AI risk in alignment with their goals, legal and regulatory requirements, and best practices, according to NIST. It was informed by public feedback and consultation with NIST’s AI Public Working Group of more than 2,500 volunteer technical experts, as well as the Framework’s initial Request for Information (RFI) process.
The report introduces and describes twelve risk categories identified by the Public Working Group, and provides a set of actions to help organizations govern, map, measure, and manage them. These categories include:
- Chemical, biological, radiological, or nuclear (CBRN) weapons information
- Confabulation
- Dangerous or Violent Recommendations
- Data Privacy
- Environmental impacts
- Human-AI Configuration
- Information Integrity
- Information Security
- Intellectual Property
- Obscene, Degrading, and/or Abusive Content
- Toxicity, Bias, and Homogenization
- Value Chain and Component Integration
In a series of tables, the GAI Profile outlines specific actions organizations can take to address risks across categories like “Information Security” and “Toxicity, Bias, and Homogenization.” These action tables map to the categories and subcategories laid out in NIST’s AI RMF. The GAI Profile also denotes the categories it considers “foundational,” or what NIST calls “the minimum set of actions to be taken” for GAI risk management.
The report is also careful to explain that many GAI risks remain unknown. The potential scale, complexity, and capabilities of GAI, along with the wide range of GAI stakeholders, uses, inputs, and outputs, make it difficult to properly scope or evaluate the associated risks, according to the report. It also notes that risk estimation challenges are aggravated by both “a lack of visibility into GAI training data” and the immature state of AI measurement and safety science.
NIST is seeking feedback on the Generative Artificial Intelligence Profile’s glossary terms, risk list, and actions.
Secure Software Development Practices for Generative AI and Dual-Use Foundation Models
This Secure Software Development Framework (SSDF) community profile is the companion resource for incorporating secure development practices for generative AI and dual-use foundation models. Its purpose is to provide both tailored recommendations and “a common language for describing secure software development practices” specific to these technologies. It also leverages and consolidates numerous sources of expertise, like NIST’s report on Adversarial Machine Learning and its January 2024 workshop on “the unique security challenges of AI models.”
Much of the Profile is a table that, for each practice, provides recommendations and references, explains why the practice may benefit AI model development, and lists the tasks involved in performing it. Each task is also assigned a priority of low, medium, or high relative importance.
NIST is seeking feedback on the Secure Software Development Practices community profile regarding patent claims.