
Unpacking New NIST Guidance on Artificial Intelligence

Gabby Miller / Aug 2, 2024

President Joe Biden announcing his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on Oct. 30, 2023. Source: The White House.

Last Friday marked the 270-day deadline for a raft of publications released by the National Institute of Standards and Technology (NIST) as part of US President Joe Biden’s 2023 executive order on artificial intelligence. The publications provide voluntary guidelines for AI developers to “improve the safety, security and trustworthiness” of their systems and aim to mitigate generative AI-specific risks while continuing to support innovation.

The down-to-the-wire rollout included final reports on generative AI, secure software, and AI standards, some of which are follow-ups to draft reports NIST released in the spring. NIST published two additional products: a draft guidance document from the US AI Safety Institute (AISI) meant to help software developers mitigate risks stemming from generative AI and dual-use foundation models, and a novel testing platform, called Dioptra, to help AI system developers measure how certain attacks degrade their AI systems’ performance.

Summaries of each NIST document or product published on July 26, 2024 can be found below.

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

The final version of NIST’s AI Risk Management Framework Generative AI Profile (RMF GAI, NIST AI 600-1) was published on Friday. It’s meant to be a companion resource to NIST’s more comprehensive AI Risk Management Framework (AI RMF), released in January 2023, and to help organizations identify generative AI risks and propose actions to manage them. The initial public draft of the profile went through several revisions that took into account public comments, workshops, and other opportunities for feedback. The final document provides more than 200 actions across twelve risk categories for AI developers to consider when managing risks. In March 2023, NIST launched the Trustworthy and Responsible AI Resource Center to implement, operationalize, and facilitate international alignment with the AI RMF.

These twelve risk categories are:

  • Chemical, Biological, Radiological, and Nuclear (CBRN) Information or Capabilities
  • Confabulation, or “hallucination”
  • Dangerous, Violent, or Hateful Content
  • Data Privacy
  • Environmental Impacts
  • Harmful Bias and Homogenization
  • Human-AI Configuration
  • Information Integrity
  • Information Security
  • Intellectual Property
  • Obscene, Degrading, and/or Abusive Content
  • Value Chain and Component Integration

NIST defines risk in this context as a measure of an event’s probability of occurring and the magnitude of its consequences. This includes risks that are likely to materialize as well as others that are more speculative and uncertain. Other dimensions it considers are the stage of the AI lifecycle, the source of the risk, and the time it takes for a GAI risk to materialize.

The AI RMF Generative AI Profile is careful to explain why the document mostly focuses on current risks. It says that estimating, and therefore mitigating, unknown or speculative risks is challenging due to “a lack of visibility into GAI training data, and the generally immature state of the science of AI measurement and safety today.” NIST opted to take an approach rooted in the “existing empirical evidence base at the time this profile was written.”

The final risk categories NIST presented in the GAI Profile were not without criticism. “Some ideas that numerous groups suggested to NIST, such as adding a risk category related to risks exacerbated by generative AI to the labor market and workplace, have unfortunately been ignored,” Jessica Newman, director of the Center for Long-Term Cybersecurity’s AI Security Initiative, told Tech Policy Press in an email. On the action side, however, Newman was pleased to see that important actions previously missing, like threat modeling, made it into the final version. The document additionally suggests actions like establishing policies and mechanisms to prevent GAI systems from generating child sexual abuse material (CSAM), nonconsensual intimate imagery (NCII), or content that violates the law, and putting protocols in place to ensure GAI systems can be deactivated when necessary.

“The Profile is more clear and comprehensive in many instances,” Newman said. “I am hopeful NIST will continue to engage with a broad and diverse set of stakeholders and adapt the Profile as the socio-technical landscape continues to evolve.”

Secure Software

NIST published the final version of its Secure Software Development Practices for Generative AI and Dual-Use Foundation Models Community Profile (SP 800-218A), a companion resource to the Secure Software Development Framework (SSDF, SP 800-218) it released in February 2022. The document reflects feedback from the GAI community as well as from a virtual workshop on Secure Development Practices for AI Models held in January 2024.

“AI model and system development is still much more of an art than an exact science,” according to the Community Profile. It blurs the “traditional boundaries between system code and system data,” which is further complicated by having to interact with these systems using “plain human language.” This forms closed loops that can be manipulated, according to NIST. The document seeks to identify how to “address these novel risks” through specific tasks, and to add recommendations and considerations specific to GAI and dual-use foundation model development to the practices laid out in the SSDF. Such tasks include storing all forms of code based on the “principle of least privilege” and securely archiving relevant files and data after a software’s release.
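As a loose illustration of what “least privilege” can look like in practice for a released artifact — a generic sketch that is not drawn from the NIST document, with a hypothetical file name — the snippet below restricts a model file so that only its owner can read it and no one can modify it:

```python
import os
import stat

# Generic illustration of "least privilege" for a released model artifact;
# not code from the NIST profile, and the file name is hypothetical.
ARTIFACT = "model_weights.bin"

# Create a placeholder artifact so the example runs end to end.
with open(ARTIFACT, "wb") as f:
    f.write(b"\x00" * 16)

# Restrict the artifact so only its owner can read it: no write access
# (a released artifact should be immutable) and no group/world access.
os.chmod(ARTIFACT, stat.S_IRUSR)

# Confirm the resulting permission bits before archiving the release.
mode = stat.S_IMODE(os.stat(ARTIFACT).st_mode)
print(f"{ARTIFACT} permissions: {oct(mode)}")  # expected: 0o400
```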

NIST believes that the goals of developing secure software, on which AI systems are built and operate at a foundational level, are to:

  • Reduce the number of vulnerabilities in released software
  • Mitigate the potential impact of the exploitation of undetected or unaddressed vulnerabilities
  • Address root causes of vulnerabilities to prevent future recurrences

Many of the secure development practices outlined in this document are adapted from other agencies, like the Cybersecurity and Infrastructure Security Agency (CISA), which has a proven track record of assessing digital systems’ risk, and applied to AI. “That’s very meaningful because it means that instead of [NIST] reinventing the wheel about what it means to be secure, we’re taking things that are proven and adapting them to a new context,” said Yacine Jernite, machine learning and society lead at Hugging Face. “That is a change in paradigm.”

AI Standards

The “Plan for Global Engagement on AI Standards” (NIST AI 100-5) is designed “to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing,” according to the document’s announcement. This third and final version was informed by priorities outlined in NIST’s Plan for Federal Engagement in AI Standards and Related Tools and is tied to the US government’s National Standards Strategy for Critical and Emerging Technology.

The publication remains deliberately broad in scope, primarily because AI standards can be designed for the needs of a particular sector while still being applicable across multiple sectors; US standards stakeholders engage with many types of interlocutors across industry, civil society, foreign governments, and more; and the US is just one of many participants in a private sector-led standards ecosystem.

NIST hopes to further four sets of outcomes with this document:

  • Scientifically sound AI standards that are accessible and amenable to adoption
  • AI standards that reflect the needs and inputs of diverse global stakeholders
  • AI standards that are developed in a process that is open, transparent, and consensus-driven
  • International relationships that are strengthened by engagement on AI standards

The global standards document also identifies specific high-priority ways for the US government to implement these broader recommendations. Some of these include further pre-standardization research on priority topics; capacity building at both the domestic and global levels, such as regularly convening stakeholders and building a global scientific network of AI standards experts; and promoting global alignment, with the US advocating for a multistakeholder-led standards ecosystem driven by global consensus. Some of these global participants, according to the document, should include representatives from governments committed to human rights-centered technical standards, such as those that have signed onto the United Nations Universal Declaration of Human Rights or that align their work with the IEEE’s Ethically Aligned Design vision.

Managing the Risk of Misuse for Dual-Use Foundation Models

The US AI Safety Institute, which is housed within NIST and was created to carry out priorities outlined in Biden’s AI executive order, published an initial draft report for Managing the Risk of Misuse for Dual-Use Foundation Models (AI 800-1). The report “outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety, and national security,” according to the Department of Commerce announcement.

AI foundation models are sometimes called “dual-use” due to their potential for both benefit and harm. This document offers seven distinct approaches for mitigating the risks of model misuse and provides recommendations on how to implement them transparently, as outlined below:

  1. Anticipate potential misuse risk
  2. Establish plans for managing misuse risk
  3. Manage the risks of model theft
  4. Measure misuse risk
  5. Ensure that misuse risk is managed before deploying foundation models
  6. Collect and respond to information about misuse after deployment
  7. Provide appropriate transparency about misuse risk

The AISI report encourages organizations to take a “holistic” approach to managing foundation models’ misuse risks. It also emphasizes that risk management is an “iterative process”: misuse risk should be assessed at each point in a foundation model’s lifecycle, applying the practices most relevant to that stage. Doing so can ultimately help prevent models from enabling harms like the development of biological weapons, offensive cyber operations, and the generation of child sexual abuse material and nonconsensual intimate imagery.

NIST is accepting public comments on this draft guidance until Sept. 9, 2024, at 11:59 p.m. Eastern Time.

Testing How AI System Models Respond to Attacks

NIST’s newly released open-source software package, named Dioptra, was designed to help AI developers test how well their AI software stands up to adversarial attacks and measure the effects those attacks have on machine learning models. The tool is motivated in part by fears that adversaries might poison training data with inaccuracies that could lead models to make disastrous decisions.

Biden’s AI executive order directed NIST, in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), to develop testing environments and to support the design, development, and deployment of associated privacy-enhancing technologies (PETs); Dioptra is a product of that section. NIST says the software can also be helpful to government agencies and small- to medium-sized businesses assessing AI developers’ claims about their systems’ performance.
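The kind of measurement a testbed like Dioptra automates can be illustrated with a simple, generic experiment: train a classifier on clean data and again on data in which a fraction of training labels has been deliberately corrupted (a basic data-poisoning attack), then compare test accuracy. The sketch below uses scikit-learn and does not use Dioptra’s own API; the dataset, poisoning routine, and parameters are purely illustrative assumptions.

```python
# A minimal, generic sketch of measuring how data poisoning degrades a model.
# This is NOT Dioptra code; it only illustrates the kind of before/after
# accuracy comparison such a testbed is meant to automate.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(poison_fraction: float) -> float:
    """Corrupt a fraction of training labels, retrain, and report test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = rng.integers(0, 10, size=n_poison)  # replace with random labels
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3):
    print(f"poison fraction {frac:.0%}: test accuracy {accuracy_after_poisoning(frac):.3f}")
```

A dedicated testbed automates this kind of comparison across many attack types and model configurations, rather than relying on one-off scripts like the one above.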
