NIST AI Risk Management Framework

What Is the NIST AI Risk Management Framework?

The AI Risk Management Framework (AI RMF) is a set of recommendations and guidelines developed by the U.S. National Institute of Standards and Technology (NIST) that enterprises and developers can use to assess and manage the risks presented by AI solutions.

At a high level, NIST's voluntary framework defines risk in terms of AI systems' potential to threaten civil liberties and rights. Risks can be categorized as long-term or short-term, high-impact or low-impact, high-probability or low-probability, and systemic or localized.

The framework notes that AI-related risks can emerge from the nature of the AI system itself or from the way users interact with it, so organizations need to be able to contextualize risks to mitigate potential harm.

NIST first released version 1.0 of the AI RMF on January 26, 2023. Then on March 30, 2023, NIST launched the Trustworthy and Responsible AI Resource Center, which will support implementation of the framework going forward.

Why Do We Need the AI Risk Management Framework?

Development of the AI RMF began after the National Artificial Intelligence Initiative Act of 2020 directed NIST to develop a set of voluntary standards for AI systems.

NIST also developed the framework to help organizations respond to the rapid development of AI systems, which have the potential to disrupt society and the economy.

The organization aims to use the AI RMF to reduce the likelihood of negative impacts from AI development. It's attempting to do this by giving organizations and regulators guidance on how to assess and manage the risks AI systems present.

It's worth noting that NIST's approach tries to support AI innovation while developing guardrails to protect society and the civil liberties and safety of ordinary citizens.

4 Core Functions of the AI Risk Management Framework

The AI Risk Management Framework presents organizations with four core functions they can use to mitigate AI risks (a minimal code sketch of how they fit together follows the list). These are:

  • Govern: Cultivating an organizational culture of risk management to help manage AI-related ethical, legal, and societal risks.
  • Map: Categorizing AI systems and mapping risks to contextual factors to better understand the potential impact on other organizations, people, and society at large.
  • Measure: Using a mixture of quantitative, qualitative, and hybrid risk assessment techniques to identify, assess, and track risks.
  • Manage: Proactively identifying AI-related risks and prioritizing the remediation of those with the most significant potential impact.
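
To make the lifecycle concrete, here is a minimal Python sketch of how the four functions could fit together. NIST doesn't prescribe any API or scoring method; the class, the method names, and the likelihood-times-impact score below are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskProgram:
    # Govern: policies that set the organizational risk culture
    policies: list[str] = field(default_factory=list)
    # Map: identified risks tied to their deployment context
    risks: dict[str, dict] = field(default_factory=dict)

    def govern(self, policy: str) -> None:
        """Record an organizational risk-management policy."""
        self.policies.append(policy)

    def map_risk(self, risk_id: str, context: str) -> None:
        """Categorize a risk and tie it to contextual factors."""
        self.risks[risk_id] = {"context": context, "score": None}

    def measure(self, risk_id: str, likelihood: float, impact: float) -> None:
        """Score a mapped risk quantitatively (illustrative formula)."""
        self.risks[risk_id]["score"] = likelihood * impact

    def manage(self) -> list[str]:
        """Return measured risks in priority order, highest score first."""
        scored = {rid: r for rid, r in self.risks.items() if r["score"] is not None}
        return sorted(scored, key=lambda rid: scored[rid]["score"], reverse=True)

program = AIRiskProgram()
program.govern("All models are reviewed for bias before release")
program.map_risk("chatbot-hallucination", context="customer support")
program.measure("chatbot-hallucination", likelihood=0.5, impact=0.8)
print(program.manage())  # ['chatbot-hallucination']
```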

What Is Trustworthy AI?

NIST's AI Risk Management Framework states that trustworthy AI systems are "valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed."

While this definition of trustworthy AI leaves room for interpretation, it is likely that the exact parameters will be further defined in future iterations of the framework.

For now, under the AI RMF, trustworthy AI can be considered any system built with fairness and explainability in mind that safeguards user privacy, generates accurate insights and information, is monitored for bias and misinformation, and doesn't present significant risks to users or society at large.
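
As a rough illustration, the trustworthiness characteristics the framework names could be tracked as a simple checklist. This is a hypothetical sketch: the field names paraphrase the framework, and the gap-listing logic is our own simplification, not anything NIST specifies.

```python
from dataclasses import dataclass, fields

@dataclass
class TrustworthinessChecklist:
    # One flag per characteristic named in the AI RMF
    valid_and_reliable: bool = False
    safe: bool = False
    secure_and_resilient: bool = False
    accountable_and_transparent: bool = False
    explainable_and_interpretable: bool = False
    privacy_enhanced: bool = False
    fair_with_harmful_bias_managed: bool = False

    def gaps(self) -> list[str]:
        """List the characteristics the system has not yet evidenced."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = TrustworthinessChecklist(safe=True, privacy_enhanced=True)
print(checklist.gaps())  # the characteristics still to demonstrate
```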

What Is Risk Tolerance?

Within NIST's AI Risk Management Framework, risk tolerance is the level of risk that an organization is willing to accept when deploying an AI system. It is typically defined by the level of financial and legal exposure an organization can accept while working toward an internal objective.

Although the AI RMF doesn't prescribe a risk tolerance for organizations, it provides some basic guidance to help them start defining their own level of acceptable risk and develop processes to manage potential risks.
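
To illustrate how a defined tolerance might be applied in practice, the sketch below compares a simple risk score against an organization-set threshold. The AI RMF prescribes neither the scoring formula nor the threshold; both are assumptions made here for the example.

```python
# Organization-defined maximum acceptable risk score (assumed value)
RISK_TOLERANCE = 0.3

def within_tolerance(likelihood: float, impact: float) -> bool:
    """Check a simple likelihood-times-impact score against tolerance."""
    return likelihood * impact <= RISK_TOLERANCE

# A 50% chance of a high-impact failure (0.7) scores 0.35, which exceeds
# the 0.3 tolerance, so it would need mitigation before deployment.
print(within_tolerance(0.5, 0.7))  # False
```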

Criticisms of the AI RMF

One of the core criticisms of NIST's AI RMF is that its guidance is too high-level. What constitutes risk is largely subjective, and the framework offers limited practical guidance on how organizations can mitigate these threats.

Part of the reason for this is that AI is an emerging technology, and there is such a wide range of variables in how it is deployed and what use cases it's designed for that there's no one-size-fits-all way to mitigate the risks it presents.

Another key criticism is that the AI RMF highlights mapping and measurement as important areas of focus, even though measuring black-box AI systems isn't possible in many cases.

Likewise, because the framework is voluntary, organizations aren't obligated to develop and deploy AI responsibly. This will likely remain the case until there's greater regulation around the use of AI or an AI bill of rights.

Risk and Reward

While the AI RMF doesn't replace regular auditing and external risk assessments, it provides the basics for organizations that want to start managing AI risks internally, so they can reap the rewards of adopting the technology without exposing themselves to ethical or legal liabilities.

With the guidance the framework provides, organizations can start to develop their ability to measure AI risk and make an informed judgment on what constitutes acceptable risk.

Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before he joined Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology.
