NIST AI RMF: What Organizations Need to Know About AI Risk Management

The NIST AI RMF is the leading U.S. framework for managing AI risk, organized into four core functions: Govern, Map, Measure, and Manage. Assessments against it evaluate AI governance maturity and support both consulting and internal audit engagements.

Zack Jones · Updated · AI governance · NIST AI RMF · risk management

72% of organizations have deployed AI in business functions. Fewer than 30% have a governance program around it. The NIST AI Risk Management Framework (AI RMF) exists to close that gap — a structured methodology for identifying which AI systems your organization runs, what risks they introduce, and what controls should be in place.

This is not theoretical. The EU AI Act reaches full enforcement on high-risk systems in August 2026. Insurance carriers are adding AI liability questions to renewal forms. Boards are asking who owns AI risk. The organizations that built governance programs early will answer those questions with documentation. The rest will scramble.

In short: it is the leading U.S. framework for managing AI risk, and assessments against it are becoming a standard expectation.

Why Does AI Risk Management Matter Now?

AI adoption has outpaced AI governance in most organizations.

This gap creates real risks:

  • Operational risk — AI systems producing incorrect or biased outputs that affect business decisions
  • Regulatory risk — New AI regulations (EU AI Act, state-level AI laws) imposing compliance obligations
  • Reputational risk — Public incidents involving AI failures or misuse eroding stakeholder trust
  • Legal risk — Liability for AI-driven decisions in hiring, lending, healthcare, and other regulated domains

The NIST AI RMF addresses these risks systematically.

What Does the NIST AI RMF Cover?

The framework is organized into two main components:

AI RMF Core

The core defines four functions that form a continuous lifecycle for AI risk management:

  1. Govern — Establish policies, processes, and accountability structures for AI risk management across the organization
  2. Map — Identify and document the context, stakeholders, and potential impacts of AI systems
  3. Measure — Assess and analyze AI risks using quantitative and qualitative methods
  4. Manage — Prioritize and implement risk responses, monitor effectiveness, and communicate results

Each function contains categories and subcategories with specific outcomes — similar in structure to NIST CSF, making it familiar to organizations already using NIST frameworks.

AI RMF Profiles

Profiles allow organizations to tailor the framework to their specific context — industry, use cases, risk tolerance, and regulatory environment. NIST and industry groups publish pre-built profiles (e.g., the Generative AI Profile) that provide targeted guidance for specific AI applications.

Who Needs a NIST AI RMF Assessment?

Consulting Engagements

Organizations that are:

  • Deploying AI systems in production and need to establish governance
  • Responding to board or executive inquiries about AI risk
  • Preparing for regulatory requirements (EU AI Act, state AI laws)
  • Building an AI governance program from the ground up

Internal Audit Engagements

Internal audit teams that need to:

  • Evaluate the organization’s AI governance maturity
  • Assess AI-related risks as part of the annual audit plan
  • Provide assurance that AI systems are being managed responsibly
  • Report on AI governance to audit committees and boards

What Does an AI RMF Assessment Evaluate?

The assessment examines your organization’s AI governance program against the AI RMF’s core functions:

| Function | Key Questions |
| --- | --- |
| Govern | Do you have AI policies, roles, and accountability structures in place? |
| Map | Have you identified and documented your AI systems, their purposes, and their potential impacts? |
| Measure | Are you assessing AI risks — bias, accuracy, security, privacy — using defined methods? |
| Manage | Are you implementing risk treatments, monitoring AI systems, and reporting on AI risk? |

The assessment produces a maturity-level evaluation for each area, along with specific findings and recommendations.
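To make that output concrete, a maturity evaluation can be represented as a score per core function. The scale and labels below are illustrative assumptions, not a NIST-defined scheme — the AI RMF itself does not prescribe maturity levels:

```python
# Illustrative only: the AI RMF does not define a maturity scale;
# this sketch assumes a simple 1-5 scoring convention.
MATURITY_LABELS = {1: "Initial", 2: "Developing", 3: "Defined",
                   4: "Managed", 5: "Optimizing"}

# Example scores for the four core functions (hypothetical assessment result).
scores = {"Govern": 2, "Map": 3, "Measure": 1, "Manage": 2}

def summarize(scores):
    """Render each function's score with its label, plus an overall average."""
    lines = [f"{fn}: {s} ({MATURITY_LABELS[s]})" for fn, s in scores.items()]
    avg = sum(scores.values()) / len(scores)
    lines.append(f"Overall: {avg:.1f}")
    return "\n".join(lines)

print(summarize(scores))
```

A per-function breakdown like this is what lets findings and recommendations attach to a specific function rather than to "AI risk" in the abstract.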

How to Prepare for an AI RMF Assessment

  1. Inventory your AI systems — Document all AI and machine learning systems in use, including third-party AI services and embedded AI features in existing software
  2. Identify AI stakeholders — Determine who is responsible for AI decisions, development, deployment, and oversight
  3. Gather existing governance documentation — AI policies, acceptable use guidelines, vendor AI agreements, data governance policies
  4. Document known AI risks — Any incidents, concerns, or risks already identified related to AI systems

Many organizations are surprised by how many AI systems they actually have once a thorough inventory is conducted.

The Relationship Between AI RMF and Other Frameworks

| Framework | Focus | Relationship |
| --- | --- | --- |
| NIST CSF | Cybersecurity program management | AI RMF complements CSF by covering AI-specific risks |
| ISO 42001 | AI management system certification | Aligns closely with the AI RMF; both can be assessed together |
| EU AI Act | EU AI regulation | AI RMF maps well to EU AI Act requirements |
| NIST SP 800-53 | Security and privacy controls | Provides technical controls that support AI RMF outcomes |

Organizations often use the AI RMF as their primary AI governance framework and crosswalk it to other requirements.
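A crosswalk can be kept as a simple mapping from AI RMF subcategory IDs to the corresponding clauses in other frameworks. The specific pairings below are examples to show the structure, not an authoritative published mapping:

```python
# Illustrative crosswalk: AI RMF subcategory IDs mapped to clauses elsewhere.
# The pairings are examples for structure only, not an official NIST mapping.
crosswalk = {
    "GOVERN 1.1": {
        "ISO 42001": "Clause 5 (Leadership)",
        "EU AI Act": "Article 17 (Quality management system)",
    },
    "MAP 1.1": {
        "ISO 42001": "Clause 4 (Context of the organization)",
        "NIST CSF": "ID.AM (Asset Management)",
    },
}

def requirements_for(subcategory):
    """Return the external requirements mapped to an AI RMF subcategory."""
    return crosswalk.get(subcategory, {})

print(requirements_for("MAP 1.1")["NIST CSF"])  # → ID.AM (Asset Management)
```

Maintaining the crosswalk in one place means a single assessment can be reported against multiple regimes without re-auditing.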

For MSPs: AI Governance Is Your Next Compliance Service Line

Your clients are deploying AI tools — Copilot, ChatGPT integrations, AI-powered CRM features, automated customer service bots. Most have no inventory of which AI systems are active, no risk classification, and no governance documentation.

NIST AI RMF assessments follow the same wholesale model as CIS Benchmark assessments: Genesis delivers the technical evaluation under your brand, and you resell it to clients at a margin. The deliverable includes an AI system inventory, a risk classification matrix, and a board-ready executive summary.

The MSPs adding AI governance now are positioning themselves before demand spikes. When the EU AI Act high-risk obligations hit in August 2026, every organization with EU exposure will need documentation. Be the MSP that already has the answer.

For vCISOs: AI Risk Belongs in Your Security Program

If AI governance is not in your advisory scope yet, it will be — the next board meeting that mentions AI risk will put it there. A NIST AI RMF assessment gives you the quantified baseline: which AI systems are deployed, what risk tier they fall into, and what governance controls are missing.

Commission the assessment through a wholesale partner. Present the findings alongside your security roadmap. AI governance integrates naturally into the NIST CSF Govern function — if you are already advising on CSF alignment, AI risk management is an extension, not a separate engagement.


Genesis delivers NIST AI RMF assessments covering all four core functions — Govern, Map, Measure, Manage. Deliverables include an AI system inventory, risk classification matrix, and board-ready executive summary.

For MSPs and vCISOs: AI governance is the next compliance service line your clients will ask for. Add it to your stack now — wholesale pricing, white-label delivery, your brand on the report.

Contact us to add AI governance to your service stack.

Frequently Asked Questions

What is the NIST AI Risk Management Framework?
The NIST AI RMF provides a structured approach for organizations to identify, assess, and manage AI risks through four core functions: Govern (policies and accountability), Map (AI system identification), Measure (risk assessment), and Manage (risk treatment and monitoring).
Is the NIST AI RMF mandatory?
It is not universally mandatory, but it is increasingly required for federal contractors and referenced by regulators as a best-practice standard. It is the leading U.S. framework for AI risk management.
How does the NIST AI RMF relate to other AI frameworks?
The AI RMF complements ISO 42001 (certifiable management system), maps well to EU AI Act requirements, and aligns with NIST CSF for organizations already using NIST frameworks. Many organizations use it as their primary AI governance framework.