Five Takeaways from the NIST AI Risk Management Framework

Jessica Newman / Jan 26, 2023

Jessica Newman, Director of the AI Security Initiative, housed at the UC Berkeley Center for Long-Term Cybersecurity, and the Co-Director of the UC Berkeley AI Policy Hub, explores how a new resource can help organizations develop more trustworthy AI tools.

Today, the National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, released Version One of its long-awaited Artificial Intelligence (AI) Risk Management Framework (RMF). The launch has been years in the making, following an initial mandate by Congress in 2020 for NIST to lead the charge, which subsequently led to a request for information, a concept paper, two drafts, and three public workshops, each of which attracted hundreds of participants from around the world.

As we grapple with the implications of flawed but powerful generative AI models like ChatGPT and Stable Diffusion, managing the risks of AI is more important than ever. The AI RMF provides a new, shared framework that can improve transparency and accountability for the rapid development and implementation of AI throughout society.

Throughout the process, NIST has sought broad feedback and engagement from domestic government partners and international allies; representatives from industry, academia, and civil society; and the American people. The AI RMF is intended for voluntary use to address risks in the design, development, use, and evaluation of AI products, services, and systems in support of trustworthy AI.

The AI RMF may seem daunting, but the structure is relatively simple and mirrors previous NIST frameworks regarding cybersecurity and privacy. The framework breaks down the AI risk management process into four core functions: "govern," "map," "measure," and "manage." Each of the functions is then broken down into categories and subcategories, which define the key components of the function. A supplemental AI RMF Playbook provides more specific actions and resources available to help actors address each subcategory.
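To make this structure concrete, the sketch below shows one way a team might represent the function, category, and subcategory hierarchy in an internal tracking tool. It is a hypothetical illustration rather than an official NIST artifact: the subcategory IDs follow the framework's naming pattern, but the descriptions are paraphrased and the status field is an assumed internal addition.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    """One subcategory of an AI RMF core function, e.g. 'MAP 1.1'."""
    sub_id: str
    description: str             # paraphrased summary, not the official text
    status: str = "not started"  # internal tracking field, not part of the RMF

@dataclass
class Category:
    cat_id: str
    subcategories: list[Subcategory] = field(default_factory=list)

@dataclass
class CoreFunction:
    """One of the four AI RMF core functions: Govern, Map, Measure, Manage."""
    name: str
    categories: list[Category] = field(default_factory=list)

# Illustrative excerpt of the Core hierarchy, used only to show its shape.
rmf_core = [
    CoreFunction("Govern", [
        Category("GOVERN 1", [
            Subcategory("GOVERN 1.2", "Characteristics of trustworthy AI are "
                        "integrated into organizational policies and practices."),
        ]),
    ]),
    CoreFunction("Map", [
        Category("MAP 1", [
            Subcategory("MAP 1.1", "Potential positive and negative impacts of "
                        "system use on individuals, communities, organizations, "
                        "society, and the planet are understood and documented."),
        ]),
    ]),
]

# Walking the hierarchy makes it easy to report which subcategories have been
# addressed and which remain open.
for fn in rmf_core:
    for cat in fn.categories:
        for sub in cat.subcategories:
            print(fn.name, sub.sub_id, "-", sub.status)
```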

Figure 1. The AI RMF Core.

The AI RMF is designed to be, and is, highly compatible with other AI risk and trustworthy AI frameworks, including ongoing work by the OECD AI Policy Observatory, the EU AI Act, ISO/IEC 23894, and the White House Blueprint for an AI Bill of Rights. Nonetheless, the AI RMF is unique because it provides in-depth, voluntary guidance primarily intended for developers and users of AI systems. The AI RMF provides a new baseline risk management structure for any organization developing, using, procuring, or evaluating AI systems.

What are the most important highlights from this long-awaited release? Here are five key takeaways that are critical in the context of the broader AI governance landscape.

1. Governance is the cornerstone.

It is not a coincidence that the first of the core functions is “Govern”, or that it appears at the center of the diagram of the AI RMF Core. Governance is the core of the AI RMF Core. Or, as the framework states, “Governance is intended as a cross-cutting function to inform and be infused throughout the other three functions.” The govern function covers creating a culture of risk management, developing processes for managing risks, and aligning AI risk management with existing risk management efforts, principles, policies, values, and legal requirements. It provides guidance on developing an inventory of AI systems, training employees, ensuring equity, inclusion, and accessibility, and incorporating external feedback into AI system design and implementation, among many other topics. This matters because it highlights that, while technical evaluation and testing are critical, they are neither feasible nor sufficient without a robust organizational governance structure in place first. No one should write off the AI RMF as being intended only for technical AI developers. The AI RMF specifies that “AI systems are inherently socio-technical in nature,” and that mitigating their risks necessarily includes interventions at the human and organizational level.

2. Risks should be assessed at every level (from individual to planetary).

The AI RMF is not only concerned with reducing enterprise risk. Yes, it can reduce the risks to companies and organizations, but it recognizes that a key part of doing so is reducing negative impacts on the external world. Importantly, that is scoped broadly, in recognition of the novel scope and scale of AI impacts. This is specified in what is arguably one of the most important subcategories in the AI RMF, “Map 1.1”. The subcategory calls for users to analyze the context of the AI system in the real world, and to document the “potential positive and negative impacts of system use to individuals, communities, organizations, society, and the planet.” Other subcategories reiterate this need. For example, “Measure 2.12” calls for the assessment and documentation of the environmental impact and sustainability of the AI model under consideration. This broad scope underscores the need for multidisciplinary and diverse input, including from people outside the development team, as reiterated in multiple subcategories throughout the AI RMF Core.

3. All AI systems benefit from risk management processes (not just "high risk" systems).

The AI RMF is designed to be useful for all AI systems and adaptable across organizations, sectors, and use cases. The process of identifying, measuring, and documenting risks can, however, help with the prioritization of risks. The AI RMF specifies that the highest risks call for the most urgent prioritization, and that risks that are low in a specific context may call for lower prioritization. Additionally, the framework specifies that higher prioritization may be warranted for AI systems that directly interact with humans, for example if the AI system is trained on human data or impacts human experience or decision-making.

In general, the AI RMF does not prescribe risk tolerance, but does specify that if significant risks are uncovered, it may be more appropriate to stop development or deployment rather than attempt a risk mitigation strategy. It states, “In cases where an AI system presents unacceptable risk levels – such as where negative impacts are imminent, severe harms are actually occurring, or catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently managed.”
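Purely as an illustration of the prioritization and stop-work logic described above, a team might encode it along the following lines; the risk levels and decision strings here are assumptions made for the sketch, not thresholds prescribed by NIST.

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    UNACCEPTABLE = 4  # e.g., imminent negative impacts or severe harms occurring

def deployment_decision(assessed_risks: list[RiskLevel]) -> str:
    """Illustrative gating logic: unacceptable risks halt work until they can
    be managed; otherwise mitigation effort follows the highest remaining risk."""
    if RiskLevel.UNACCEPTABLE in assessed_risks:
        return "cease development and deployment safely until risks are managed"
    if RiskLevel.HIGH in assessed_risks:
        return "prioritize mitigation of high risks before proceeding"
    return "proceed, with documented monitoring of remaining risks"

print(deployment_decision([RiskLevel.LOW, RiskLevel.HIGH]))
```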

4. Documentation is everything.

A notable element of the AI RMF is its insistence on documentation. Dozens of subcategories within the AI RMF call for documentation, ranging from organizational roles and responsibilities to an AI system’s knowledge limits. This is a critical intervention because documentation of AI systems today is often haphazard and non-standardized. The AI RMF sets a baseline expectation of what organizations should be documenting about their AI systems at every stage of the AI lifecycle. Moreover, the AI RMF Playbook points users to guidelines and templates that explain how to document each consideration. Improved documentation will play an important role in providing the transparency and accountability that are sorely needed from AI organizations. It will also help organizations prepare for the rise of external audits and adhere to transparency requirements, for example in the EU AI Act.
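As a hedged illustration of what such a documentation baseline might look like in practice, the sketch below defines a hypothetical per-system record. The field names, example values, and subcategory references are assumptions that echo the kinds of items the AI RMF asks organizations to document; this is not an official NIST or Playbook template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Hypothetical per-system documentation record."""
    system_name: str
    owner_role: str        # accountable role under the organization's governance policy
    intended_use: str
    knowledge_limits: str  # conditions under which outputs are unreliable
    impacts: dict[str, str] = field(default_factory=dict)
    related_subcategories: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

# Illustrative entry; every value below is invented for the example.
record = AISystemRecord(
    system_name="resume-screening-model",
    owner_role="Head of People Analytics",
    intended_use="Rank applications for recruiter review; never auto-reject.",
    knowledge_limits="Not validated for roles outside the training job families.",
    impacts={"individuals": "Potential disparate impact on protected groups."},
    related_subcategories=["GOVERN 1.2", "MAP 1.1"],
)
print(record.system_name, record.related_subcategories)
```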

5. Connecting AI trustworthiness to AI risks is complicated, but important.

The AI RMF defines seven “characteristics of trustworthy AI”: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful biases managed. These characteristics are integrated into the AI RMF Core in a couple of ways, including in “Govern 1.2” and “Measure 2”, which call for integrating the characteristics of trustworthy AI into organizational policies and for evaluating those characteristics. However, the relationships between the trustworthy characteristics and the rest of the risk management framework are not explicitly defined or explored.

In a white paper released today by the UC Berkeley Center for Long-Term Cybersecurity, we provide a supplementary resource for the NIST AI RMF that helps to bridge these elements. The paper, called “A Taxonomy of Trustworthiness for Artificial Intelligence: Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle,” was published following a year-long collaboration with AI researchers and multistakeholder experts.

Using NIST’s characteristics of trustworthiness as a starting point, we name 150 properties of trustworthiness that provide greater nuance and detail about each characteristic. We then map those properties to the parts of the AI lifecycle where they are likely to be especially critical, and to the relevant subcategories from the NIST AI RMF core functions, where readers can find specific resources and tools to address each property. This helps connect specific elements of each characteristic of trustworthiness to specific parts of the AI RMF Core, for example helping researchers working on AI usability or accessibility quickly identify the subcategories likely to matter most.
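The sketch below illustrates, in simplified and hypothetical form, the kind of lookup this mapping enables; the property names, lifecycle phase labels, and subcategory assignments are illustrative placeholders rather than quotations from the taxonomy.

```python
# Hypothetical lookup in the spirit of the CLTC taxonomy: each property of
# trustworthiness points to lifecycle phases where it matters most and to
# AI RMF subcategories where relevant guidance and tools can be found.
TAXONOMY: dict[str, dict[str, list[str]]] = {
    "accessibility": {
        "lifecycle_phases": ["design", "deployment and use"],
        "rmf_subcategories": ["MAP 1.1", "MEASURE 2.11"],
    },
    "environmental sustainability": {
        "lifecycle_phases": ["model development", "operation and monitoring"],
        "rmf_subcategories": ["MEASURE 2.12"],
    },
}

def subcategories_for(property_name: str) -> list[str]:
    """Jump from a property of trustworthiness to the AI RMF subcategories
    (and, via the Playbook, the resources) most relevant to it."""
    return TAXONOMY[property_name]["rmf_subcategories"]

print(subcategories_for("accessibility"))
```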

The CLTC paper also provides an important counterpart to the NIST AI RMF because it provides greater detail about what is at stake for the realization of trustworthy AI characteristics. It includes a broader range of issues, such as the ability to opt out, which is a key principle of the White House Blueprint for an AI Bill of Rights, but is not made as explicit in the AI RMF. The paper may serve as a tool for teams to help assess how they are incorporating properties of trustworthiness into their AI risk management process at different phases of the AI lifecycle.

The launch of the AI RMF is an important milestone in the story of AI governance in the United States. As with NIST’s cybersecurity and privacy frameworks before it, implementation of this new framework is not going to be quick or straightforward. Still, the AI RMF provides an important shared starting place for any organization to reduce the risks of AI development and deployment.

We are in a moment of rapid technological advances and increased access to powerful public, private, and open-source models, but in the current absence of regulation or standards, these models are being deployed despite significant limitations and the potential for enormous harm. If organizations work to thoughtfully implement the AI RMF within their existing risk management processes, it will not only help them make and use more trustworthy AI systems, but also reduce the risks to their workers and organizations, and to their communities, society, and environment. With so much on the line, implementation of this comprehensive framework can’t happen soon enough.
