Summary

  • NIST’s AI Risk Management Framework (AI RMF) is a tool for responsible AI development.
  • It has two parts: Part 1 covers understanding AI risks, including harm to individuals, companies, and society.
  • Part 2 (the Core) includes the GOVERN, MAP, MEASURE, and MANAGE functions for managing AI risks effectively.

The National Institute of Standards and Technology (NIST) made an exciting move on January 26, 2023, by releasing Version 1.0 of the Artificial Intelligence Risk Management Framework (AI RMF). This framework is a promising, reputable tool for organizations looking to navigate the world of artificial intelligence (AI) ethically.

Here’s what you need to know.

What is the NIST AI risk management framework?

The AI RMF, which was crafted by NIST in collaboration with key stakeholders, offers a valuable and adaptable tool for organizations venturing into the world of AI. It’s all about ensuring that as AI systems and technologies evolve, they do so responsibly, with a keen eye on minimizing harm to individuals and society.

NIST’s approach is designed to empower organizations to mitigate risks, seize opportunities, and elevate the trustworthiness of their AI systems from the very beginning of development through deployment.

The framework is broken down into two core parts: 

1. Planning and understanding: This first section focuses on providing organizations with guidance on how to assess the risks and benefits of AI and how to define what makes an AI system trustworthy. NIST suggests evaluating systems against these key characteristics:

  • Valid and reliable
  • Safe, secure, and resilient
  • Accountable and transparent
  • Explainable and interpretable
  • Privacy-enhanced
  • Fair, with harmful biases managed

2. The core framework: The second half of the document, often referred to as the “core” of the framework, lays out the four fundamental steps organizations should take to responsibly incorporate risk management into their AI development process. These steps, illustrated with a short sketch after the list below, are: 

  • Govern: This central step fosters a culture of ongoing risk management within the organization.
  • Map: It helps identify and map risks in the context of AI development.
  • Measure: This step involves analyzing, assessing, and monitoring the identified risks.
  • Manage: Finally, it prioritizes risks based on their potential impact, allowing organizations to apply appropriate mitigation measures.
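
To tie the two parts together, here’s a minimal, purely illustrative Python sketch. The class and field names are our own shorthand, not terminology from the framework itself; it simply represents NIST’s trustworthiness characteristics as a checklist and names the four Core functions:

```python
from dataclasses import dataclass, fields
from enum import Enum


class CoreFunction(Enum):
    """The four functions of the AI RMF Core."""
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"


@dataclass
class TrustworthinessChecklist:
    """NIST's key characteristics of a trustworthy AI system."""
    valid_and_reliable: bool = False
    safe_secure_resilient: bool = False
    accountable_and_transparent: bool = False
    explainable_and_interpretable: bool = False
    privacy_enhanced: bool = False
    fair_with_bias_managed: bool = False

    def gaps(self) -> list[str]:
        """Return the characteristics this system does not yet satisfy."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


# Hypothetical example: a chatbot that has passed validation but not a bias review
checklist = TrustworthinessChecklist(valid_and_reliable=True, privacy_enhanced=True)
print(checklist.gaps())      # remaining characteristics to evaluate
print(CoreFunction.MEASURE)  # the Core function such an assessment falls under
```

In practice, a spreadsheet or governance tool would do the same job; the point is that each characteristic gets assessed explicitly rather than assumed.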

Below, we’ll look at part 1 and part 2 of the framework in more detail. 

Part 1: Foundational information

The first part of the NIST AI Framework explores the potential risks and harms associated with AI, along with the challenges of mitigating them. The term ‘harm’ in the context of AI refers to the following: 

  • People: This category encompasses the potential harm that AI systems can pose to individuals. This harm may manifest as threats to civil liberties, infringements on personal rights, risks to physical and psychological safety, or even economic disadvantages.
  • Companies: AI can impact organizations in various ways, such as disruptions to business operations, financial losses stemming from security breaches, and damage to an organization’s hard-earned reputation.
  • Society: The effects of AI reach beyond individuals, potentially rippling through interconnected elements and resources like the global financial system, supply chains, and electrical grids. Moreover, it extends to concerns about harm to our natural resources, the environment, and the planet as a whole.

The challenges of measuring AI risks

For all the potential pitfalls of AI, NIST notes that measuring and managing AI risks isn’t always easy. Some of the hurdles the RMF outlines include:

  • Misaligned risk metrics: It’s highly likely that the team that develops an AI system will be different from the one that deploys it. This disconnect can complicate risk assessment.
  • Stages of the AI life cycle: Different risks can rear their heads at various stages of an AI system’s life cycle. Understanding and addressing these shifting risks is a juggling act.
  • Real-world behavior: AI systems often behave differently in real-world settings compared to laboratory or controlled environments. These differences can throw a wrench into risk evaluation.

Risk tolerance and prioritization 

The NIST AI Framework is designed to be adaptable, complementing your current risk management practices while ensuring compliance with relevant mandates and laws. 

To define their risk tolerance levels, organizations should follow the criteria, tolerance, and risk response defined by their specific domain, discipline, sector, or professional requirements – being mindful of existing regulations. 

If there are no established guidelines for your sector or application, it’s up to your organization to set a reasonable risk tolerance, using the RMF for guidance. Of course, not all risks are created equal, and the RMF – like all NIST frameworks – emphasizes the importance of prioritization, starting with risks deemed unacceptable. 

These are situations where there’s a significant likelihood of imminent or catastrophic harm. For these use cases, it’s vital to hit the pause button on development until the risks are adequately managed.
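
To show what this kind of prioritization might look like in practice, here’s a small, hypothetical Python sketch. The 1–5 scoring scale and the “unacceptable” threshold are assumptions made for the example, not values prescribed by the RMF:

```python
from dataclasses import dataclass


@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) to 5 (catastrophic) -- assumed scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


UNACCEPTABLE_THRESHOLD = 20  # assumed cut-off; set per your own risk tolerance

risks = [
    AIRisk("Training data leaks personal information", likelihood=4, impact=5),
    AIRisk("Model drifts after deployment", likelihood=3, impact=3),
    AIRisk("Chatbot output is occasionally off-brand", likelihood=4, impact=1),
]

# Address the highest-scoring risks first, per the RMF's emphasis on prioritization
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    status = "PAUSE DEVELOPMENT" if risk.score >= UNACCEPTABLE_THRESHOLD else "mitigate per plan"
    print(f"{risk.score:>2}  {risk.name}: {status}")
```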

The RMF notes that risk management is a holistic practice that goes hand in hand with robust enterprise governance. To be effective at managing AI, organizations must establish and maintain the right procedures for accountability. This includes assigning roles and responsibilities and nurturing a culture of awareness and incentivization from the boardroom to the office floor.

Part 2: Core and profiles

Now, let’s dive into the heart of the NIST AI Framework, which is found in Part 2—referred to as the “Core.” This section lays out the essential functions that organizations can use to tackle the very real threats posed by AI systems. These functions are called GOVERN, MAP, MEASURE, and MANAGE, and they are further broken down into groups and subgroups for clarity and effectiveness.

  1. GOVERN: This function is like the captain of your ship, guiding you through all phases of your AI risk management processes and procedures. It’s the compass that keeps you on course.
  2. MAP, MEASURE, and MANAGE: These functions are the dynamic trio, specialized for addressing AI-specific settings and stages within the AI lifecycle. They come into play when dealing with the nitty-gritty of AI systems.

Govern

The GOVERN function helps organizations create a culture of risk management while ensuring the responsible and trustworthy development, deployment, and use of AI systems.

Here’s a closer look at the GOVERN section in more detail: 

  • Cultivating a risk management culture: It’s all about instilling a mindset within organizations that are involved in AI, whether they’re designing, developing, deploying, testing, or acquiring AI systems. This culture encourages a responsible approach to managing AI risks.
  • Methodical approaches: GOVERN lays out the methods and processes to achieve these results. It provides a roadmap for forecasting, identifying, and managing the potential risks that AI systems might pose, especially to users and society at large.
  • Impact assessment: Part of its arsenal involves assessing the potential impacts of AI systems, ensuring that they align with organizational principles, policies, and strategic priorities.
  • Bridging values and technology: GOVERN establishes a vital link between an organization’s values and principles and the technical aspects of AI system design and development. This connection empowers those involved in the entire AI lifecycle—be it procurement, training, deployment, or monitoring—to integrate organizational values into their practices and competencies.
  • Lifecycle considerations: GOVERN takes a holistic approach, considering the entire product lifecycle, related procedures, and any legal or other challenges that might arise when dealing with third-party hardware or software systems and data.

One thing to keep in mind is that GOVERN is not a standalone function; it’s interwoven throughout the AI risk management process, ensuring that the other functions within the process incorporate its crucial components, especially when dealing with compliance or evaluation.
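
As a rough illustration only, an organization might record some GOVERN outputs – accountability assignments, review cadence, escalation paths – in a simple structure like the one below. The roles, field names, and 90-day cadence are hypothetical, not taken from the framework:

```python
# A hypothetical, simplified governance record; real GOVERN outcomes are
# organizational policies and processes, not a config file.
ai_governance_policy = {
    "risk_owner": "Chief Risk Officer",
    "accountable_roles": {
        "procurement": "Vendor Management Lead",
        "development": "ML Engineering Lead",
        "deployment": "Platform Operations Lead",
        "monitoring": "Responsible AI Committee",
    },
    "review_cadence_days": 90,  # assumed review interval
    "third_party_components_reviewed": True,
    "escalation_path": ["Team lead", "AI governance board", "Executive sponsor"],
}

# GOVERN is cross-cutting: the other functions should reference these assignments
print(ai_governance_policy["accountable_roles"]["monitoring"])
```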

Map

The Map function focuses on framing risks, identifying potential pitfalls, and ensuring that AI systems are not just reliable but also trustworthy from the get-go.

This component includes: 

  • Context enhancement: MAP helps organizations boost their understanding of the contexts in which AI systems operate. This means gaining insights into the environment, the users, and the specific conditions that affect AI performance.
  • Testing assumptions: It encourages organizations to put their usage context assumptions to the test. This helps uncover any gaps or misconceptions that might be lurking.
  • Identifying abnormalities: MAP enables the identification of instances where AI systems are not functioning as expected, whether it’s within their intended context or outside of it. Detecting these anomalies early on is crucial for proactive risk mitigation.
  • Discovering positive uses: It’s not all about risks; MAP also helps organizations identify positive and beneficial applications of their AI systems. This opens doors to harnessing AI for greater good.
  • Understanding limitations: MAP shines a light on the limitations of AI and machine learning processes. This knowledge is essential for realistic expectations and avoiding overreliance on AI.
  • Recognizing constraints: It’s one thing to have AI capabilities on paper; it’s another to apply them effectively in real-world scenarios. MAP helps organizations recognize practical limitations that could have adverse effects.
  • Anticipating adverse effects: Organizations can’t afford to be blindsided by unexpected consequences. MAP encourages them to foresee and plan for known and foreseeable adverse effects arising from the intended use of AI systems.

Once organizations complete the MAP function, they should have a solid grasp of the effects of AI systems in their specific context. Armed with this knowledge, they can make informed decisions about whether to design, build, or deploy an AI system. If they choose to move forward, the MEASURE and MANAGE functions, along with the established policies and processes from GOVERN, come into play to effectively manage AI risk.
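
To picture what a MAP output could look like, here’s a hypothetical sketch of a record for a single AI use case, capturing its context, assumptions, limitations, and anticipated adverse effects. The field names and the chatbot example are ours, not the framework’s:

```python
from dataclasses import dataclass, field


@dataclass
class MappedUseCase:
    """A hypothetical record of one AI use case produced during MAP."""
    system: str
    intended_context: str
    users: list[str]
    assumptions_to_test: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    anticipated_adverse_effects: list[str] = field(default_factory=list)
    beneficial_uses: list[str] = field(default_factory=list)


support_bot = MappedUseCase(
    system="Customer support chatbot",
    intended_context="English-language billing questions from existing customers",
    users=["Retail customers", "Support agents reviewing transcripts"],
    assumptions_to_test=["Customers phrase questions similarly to training data"],
    known_limitations=["Unreliable on questions outside billing"],
    anticipated_adverse_effects=["Incorrect refund advice causing financial harm"],
    beneficial_uses=["Faster resolution of routine billing questions"],
)

# The MEASURE function can then track whether these assumptions and limits hold
print(len(support_bot.anticipated_adverse_effects), "adverse effect(s) to measure")
```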

Measure

The Measure function acts as a toolkit for assessing and quantifying AI risks, using various tools, techniques, and methodologies—whether quantitative, qualitative, or a mix of both—to analyze, assess, benchmark, and monitor AI risks and their associated impacts.

  • Comprehensive analysis: MEASURE dives deep into AI risk assessment. It leverages a range of methods to thoroughly scrutinize AI systems, from their functionality to their trustworthiness.
  • Guidance to MANAGE: The insights gathered through MEASURE provide valuable guidance to the MANAGE function, which focuses on controlling and mitigating AI risks effectively.
  • Pre-deployment testing: MEASURE emphasizes the importance of testing AI systems both before deployment and regularly thereafter. This ongoing evaluation helps ensure that AI systems maintain their trustworthiness.
  • Documenting trustworthiness: Trustworthiness is a key factor in AI risk management. MEASURE encourages the documentation of AI systems’ functionality and trustworthiness aspects to keep a comprehensive record.
  • Tracking metrics: To measure AI risks effectively, MEASURE includes tracking metrics that assess trustworthy characteristics, social impacts, and the interactions between humans and AI systems.
  • Rigorous testing: Rigorous software testing and performance evaluation procedures are at the core of MEASURE. This involves measuring uncertainty, comparing AI performance to established benchmarks, and providing structured reporting and documentation of findings.
  • Independent reviews: To improve the effectiveness of testing and reduce internal biases or conflicts of interest, MEASURE suggests independent review processes. These ensure that AI testing is conducted with integrity and transparency.

Ultimately, MEASURE equips organizations with the means to not only assess AI risks but also to quantify them. This data-driven approach allows for informed decision-making throughout the AI lifecycle.
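
As a loose illustration of the “tracking metrics” and benchmark-comparison ideas, the sketch below compares observed metrics against pre-agreed thresholds and records the findings. The metric names, thresholds, and values are invented for the example:

```python
# A hypothetical, simplified MEASURE log: compare observed metrics against
# pre-agreed benchmarks and document the result for later review.
benchmarks = {                     # thresholds assumed for the example
    "accuracy": 0.90,              # minimum acceptable accuracy
    "false_positive_rate": 0.05,   # maximum acceptable false positive rate
}

observed = {
    "accuracy": 0.87,
    "false_positive_rate": 0.04,
}

findings = []
for metric, benchmark in benchmarks.items():
    value = observed[metric]
    # Higher is better for accuracy; lower is better for error rates
    passed = value >= benchmark if metric == "accuracy" else value <= benchmark
    findings.append({"metric": metric, "observed": value, "benchmark": benchmark, "passed": passed})

for finding in findings:
    print(finding)  # structured reporting/documentation that feeds the MANAGE function
```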

Manage

The MANAGE function within the NIST AI framework is your roadmap for taking action on the risks you’ve identified and measured. It’s all about systematically and responsibly responding to, recovering from, and communicating about AI-related incidents or events. Let’s break down what MANAGE entails:

  • Assigning resources: MANAGE requires organizations to regularly allocate resources to the risks that have been mapped and measured, following the guidelines set out in the GOVERN function. This ensures that risks are actively addressed and managed.
  • Risk treatment: To mitigate the possibility of system failures and adverse outcomes, MANAGE utilizes the wealth of contextual information gathered during expert consultations and input from relevant AI actors. These insights, which are developed in GOVERN and carried out in MAP, play a pivotal role in shaping risk treatment plans.
  • Enhancing accountability: The MANAGE function builds upon systematic documentation procedures implemented in GOVERN and used in MAP and MEASURE. This documentation not only enhances accountability but also promotes transparency in AI risk management initiatives.
  • Prioritizing risk: After successfully completing the MANAGE function, organizations develop plans to prioritize risks. They also establish procedures for regular monitoring and improvement. This proactive approach empowers Framework users to effectively manage the risks associated with deployed AI systems.
  • Adapting: Importantly, the MANAGE function is not a one-and-done process. It should be continually applied to deployed AI systems as methods, contexts, risks, and the expectations of relevant AI actors evolve over time. This adaptability ensures that AI risk management remains effective and responsive to changing circumstances.

In essence, MANAGE is the action-oriented component of the NIST AI Framework. It equips organizations with the tools and strategies needed to address AI risks head-on, with a focus on accountability, transparency, and adaptability. By diligently implementing the MANAGE function, organizations can confidently navigate the evolving landscape of AI and maintain the trustworthiness of their AI systems.
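
As a final illustration, here’s a hypothetical sketch of a MANAGE-stage treatment plan that picks up the prioritized risks from earlier. The response categories, owners, and review intervals are assumptions for the example, not framework terminology:

```python
from dataclasses import dataclass
from enum import Enum


class Response(Enum):
    """Common risk responses; this exact taxonomy is an assumption for the sketch."""
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"
    ACCEPT = "accept"


@dataclass
class TreatmentPlan:
    risk: str
    response: Response
    owner: str               # accountability assigned under GOVERN
    review_after_days: int   # MANAGE is continuous, so every plan gets a review date


plans = [
    TreatmentPlan("Training data leaks personal information", Response.MITIGATE,
                  owner="Data Protection Officer", review_after_days=30),
    TreatmentPlan("Model drifts after deployment", Response.MITIGATE,
                  owner="ML Engineering Lead", review_after_days=90),
    TreatmentPlan("Chatbot output is occasionally off-brand", Response.ACCEPT,
                  owner="Brand Team", review_after_days=180),
]

for plan in plans:
    print(f"{plan.risk} -> {plan.response.value} (owner: {plan.owner})")
```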

What to do now?

Ultimately, the AI RMF is a great tool to assist businesses in creating a strong governance program and managing the risks associated with their AI systems. Even though it’s not mandated by any current or proposed laws, it’s undoubtedly a valuable resource that can help companies stay ahead in the fast-moving compliance space. 


For more advice on using AI tools securely and compliantly, download our exclusive whitepaper today.

Polymer is a human-centric data loss prevention (DLP) platform that holistically reduces the risk of data exposure in your SaaS apps and AI tools. In addition to automatically detecting and remediating violations, Polymer coaches your employees to become better data stewards. Try Polymer for free.
