
NIST AI 600-1 v1.0

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

Framework data extracted from the Secure Controls Framework (SCF) v2025.4 Set Theory Relationship Mapping (STRM) files, licensed under CC BY-ND 4.0. Attribution required per license terms.


GOVERN-1 Govern 1: Legal and Regulatory Compliance (27 requirements)

GOVERN 1.1 Legal and regulatory requirements involving AI are understood, managed, and documented
GV-1.1-001 Align GAI development and use with applicable laws and regulations, including those related to …
GOVERN 1.2 The characteristics of trustworthy AI are integrated into organizational policies, processes …
GV-1.2-001 Establish transparency policies and processes for documenting the origin and history of training …
GV-1.2-002 Establish policies to evaluate risk-relevant capabilities of GAI and robustness of safety …
GOVERN 1.3 Processes, procedures, and practices are in place to determine the needed level of risk management …
GV-1.3-001 Consider the following factors when updating or defining risk tiers for GAI: Abuses and impacts to …
GV-1.3-002 Establish minimum thresholds for performance or assurance criteria and review as part of …
GV-1.3-003 Establish a test plan and response policy, before developing highly capable models, to …
GV-1.3-004 Obtain input from stakeholder communities to identify unacceptable use, in accordance with …
GV-1.3-005 Maintain an updated hierarchy of identified and expected GAI risks connected to contexts of GAI …
GV-1.3-006 Reevaluate organizational risk tolerances to account for unacceptable negative risk (such as where …
GV-1.3-007 Devise a plan to halt development or deployment of a GAI system that poses unacceptable negative …
GOVERN 1.4 The risk management process and its outcomes are established through transparent policies …
GV-1.4-001 Establish policies and mechanisms to prevent GAI systems from generating CSAM, NCII or content …
GV-1.4-002 Establish transparent acceptable use policies for GAI that address illegal use or applications of …
GOVERN 1.5 Ongoing monitoring and periodic review of the risk management process and its outcomes are …
GV-1.5-001 Define organizational responsibilities for periodic review of content provenance and incident …
GV-1.5-002 Establish organizational policies and procedures for after-action reviews of GAI system incident …
GV-1.5-003 Maintain a document retention policy to keep history for test, evaluation, validation, and …
GOVERN 1.6 Mechanisms are in place to inventory AI systems and are resourced according to organizational risk …
GV-1.6-001 Enumerate organizational GAI systems for incorporation into AI system inventory and adjust AI …
GV-1.6-002 Define any inventory exemptions in organizational policies for GAI systems embedded into …
GV-1.6-003 In addition to general model, governance, and risk information, consider the following items in …
GOVERN 1.7 Processes and procedures are in place for decommissioning and phasing out AI systems safely and in …
GV-1.7-001 Protocols are put in place to ensure GAI systems are able to be deactivated when necessary
GV-1.7-002 Consider the following factors when decommissioning GAI systems: Data retention requirements; Data …

GOVERN-6 Govern 6: Third-Party Risk Management (19 requirements)

GOVERN 6.1 Policies and procedures are in place that address AI risks associated with third-party entities …
GV-6.1-001 Categorize different types of GAI content with associated third-party rights (e.g., copyright …
GV-6.1-002 Conduct joint educational activities and events in collaboration with third parties to promote …
GV-6.1-003 Develop and validate approaches for measuring the success of content provenance management efforts …
GV-6.1-004 Draft and maintain well-defined contracts and service level agreements (SLAs) that specify content …
GV-6.1-005 Implement a use-case-based supplier risk assessment framework to evaluate and monitor third-party …
GV-6.1-006 Include clauses in contracts which allow an organization to evaluate third-party GAI processes and …
GV-6.1-007 Inventory all third-party entities with access to organizational content and establish approved …
GV-6.1-008 Maintain records of changes to content made by third parties to promote content provenance …
GV-6.1-009 Update and integrate due diligence processes for GAI acquisition and procurement vendor …
GV-6.1-010 Update GAI acceptable use policies to address proprietary and open-source GAI technologies and …
GOVERN 6.2 Contingency processes are in place to handle failures or incidents in third-party data or AI …
GV-6.2-001 Document GAI risks associated with system value chain to identify over-reliance on third-party …
GV-6.2-002 Document incidents involving third-party GAI data and systems, including open-data and …
GV-6.2-003 Establish incident response plans for third-party GAI technologies: Align incident response plans …
GV-6.2-004 Establish policies and procedures for continuous monitoring of third-party GAI systems in …
GV-6.2-005 Establish policies and procedures that address GAI data redundancy, including model weights and …
GV-6.2-006 Establish policies and procedures to test and manage risks related to rollover and fallback …
GV-6.2-007 Review vendor contracts and avoid arbitrary or capricious termination of critical GAI technologies
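GV-6.1-008 asks for records of third-party content changes that support provenance. One way to make such records tamper-evident is a simple hash chain, where each entry commits to the one before it. The sketch below is illustrative only; the field names (`actor`, `description`) and the use of SHA-256 over canonical JSON are assumptions, not anything the framework prescribes.

```python
import hashlib
import json
import time

def append_change(log, actor, description):
    """Append a tamper-evident record of a third-party content change.

    Each entry hashes the previous entry's hash, so altering any earlier
    record invalidates every hash that follows it (a simple hash chain).
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "actor": actor,              # hypothetical field names
        "description": description,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Return True if no entry has been altered since it was appended."""
    prev = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

A real audit log would also need durable storage and access controls; the chain only detects after-the-fact edits, it does not prevent them.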

MANAGE-2 Manage 2: Sustain and Monitor Deployed Systems (17 requirements)

MANAGE 2.2 Mechanisms are in place and applied to sustain the value of deployed AI systems
MG-2.2-001 Compare GAI system outputs against pre-defined organization risk tolerance, guidelines, and …
MG-2.2-002 Document training data sources to trace the origin and provenance of AI-generated content
MG-2.2-003 Evaluate feedback loops between GAI system content provenance and human reviewers, and update …
MG-2.2-004 Evaluate GAI content and data for representational biases and employ techniques such as …
MG-2.2-005 Engage in due diligence to analyze GAI output for harmful content, potential misinformation, and …
MG-2.2-006 Use feedback from internal and external AI Actors, users, individuals, and communities, to assess …
MG-2.2-007 Use real-time auditing tools where they can be demonstrated to aid in the tracking and validation …
MG-2.2-008 Use structured feedback mechanisms to solicit and capture user input about AI-generated content …
MG-2.2-009 Consider opportunities to responsibly use synthetic data and other privacy-enhancing techniques in …
MANAGE 2.3 Procedures are followed to respond to and recover from a previously unknown risk when it is …
MG-2.3-001 Develop and update GAI system incident response and recovery plans and procedures to address the …
MANAGE 2.4 Mechanisms are in place and applied, and responsibilities are assigned and understood, to …
MG-2.4-001 Establish and maintain communication plans to inform AI stakeholders as part of the deactivation …
MG-2.4-002 Establish and maintain procedures for escalating GAI system incidents to the organizational risk …
MG-2.4-003 Establish and maintain procedures for the remediation of issues which trigger incident response …
MG-2.4-004 Establish and regularly review specific criteria that warrant the deactivation of GAI systems in …

MANAGE-3 Manage 3: Third-Party Resource Management (16 requirements)

MANAGE 3.1 AI risks and benefits from third-party resources are regularly monitored, and risk controls are …
MG-3.1-001 Apply organizational risk tolerances and controls (e.g., acquisition and procurement processes …
MG-3.1-002 Test GAI system value chain risks (e.g., data poisoning, malware, other software and hardware …
MG-3.1-003 Re-assess model risks after fine-tuning or retrieval-augmented generation implementation and for …
MG-3.1-004 Take reasonable measures to review training data for CBRN information, and intellectual property …
MG-3.1-005 Review various transparency artifacts (e.g., system cards and model cards) for third-party models
MANAGE 3.2 Pre-trained models which are used for development are monitored as part of AI system regular …
MG-3.2-001 Apply explainable AI (XAI) techniques (e.g., analysis of embeddings, model …
MG-3.2-002 Document how pre-trained models have been adapted (e.g., fine-tuned, or retrieval-augmented …
MG-3.2-003 Document sources and types of training data and their origins, potential biases present in the …
MG-3.2-004 Evaluate user-reported problematic content and integrate feedback into system updates
MG-3.2-005 Implement content filters to prevent the generation of inappropriate, harmful, false, illegal, or …
MG-3.2-006 Implement real-time monitoring processes for analyzing generated content performance and …
MG-3.2-007 Leverage feedback and recommendations from organizational boards or committees related to the …
MG-3.2-008 Use human moderation systems where appropriate to review generated content in accordance with …
MG-3.2-009 Use organizational risk tolerance to evaluate acceptable risks and performance metrics and …
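MG-3.2-005 calls for content filters on generated output. As a minimal sketch of the shape such a filter takes, the pattern table below maps category names to regular expressions; the category names and patterns are hypothetical placeholders, and a production filter would use trained classifiers and curated term lists rather than a toy blocklist.

```python
import re

# Hypothetical category -> pattern map; placeholder patterns only.
BLOCKED_PATTERNS = {
    "credentials": re.compile(r"(?i)\b(api[_-]?key|password)\s*[:=]\s*\S+"),
    "profanity":   re.compile(r"(?i)\b(damn|hell)\b"),  # stand-in terms
}

def filter_output(text):
    """Return (allowed, matched_categories) for a generated string."""
    hits = [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(text)]
    return (len(hits) == 0, hits)
```

Returning the matched categories, not just a boolean, supports the logging and human-moderation escalation that MG-3.2-008 describes.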

MANAGE-4 Manage 4: Post-Deployment Monitoring and Improvement (16 requirements)

MANAGE 4.1 Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and …
MG-4.1-001 Collaborate with external researchers, industry experts, and community representatives to maintain …
MG-4.1-002 Establish, maintain, and evaluate effectiveness of organizational processes and procedures for …
MG-4.1-003 Evaluate the use of sentiment analysis to gauge user sentiment regarding GAI content performance …
MG-4.1-004 Implement active learning techniques to identify instances where the model fails or produces …
MG-4.1-005 Share transparency reports with internal and external stakeholders that detail steps taken to …
MG-4.1-006 Track dataset modifications for provenance by monitoring data deletions, rectification requests …
MG-4.1-007 Verify that AI Actors responsible for monitoring reported issues can effectively evaluate GAI …
MANAGE 4.2 Measurable activities for continual improvements are integrated into AI system updates and include …
MG-4.2-001 Conduct regular monitoring of GAI systems and publish reports detailing the performance, feedback …
MG-4.2-002 Practice and follow incident response plans for addressing the generation of inappropriate or …
MG-4.2-003 Use visualizations or other methods to represent GAI model behavior to ease non-technical …
MANAGE 4.3 Incidents and errors are communicated to relevant AI Actors, including affected communities …
MG-4.3-001 Conduct after-action assessments for GAI system incidents to verify incident response and recovery …
MG-4.3-002 Establish and maintain policies and procedures to record and track GAI system reported errors …
MG-4.3-003 Report GAI incidents in compliance with legal and regulatory requirements (e.g., HIPAA breach …

MEASURE-2 Measure 2: AI System Performance and Trustworthiness (60 requirements)

MEASURE 2.2 Evaluations involving human subjects meet applicable requirements (including human subject …
MS-2.2-001 Assess and manage statistical biases related to GAI content provenance through techniques such as …
MS-2.2-002 Document how content provenance data is tracked and how that data interacts with privacy and …
MS-2.2-003 Provide human subjects with options to withdraw participation or revoke their consent for present …
MS-2.2-004 Use techniques such as anonymization, differential privacy or other privacy-enhancing …
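MS-2.2-004 mentions differential privacy among the privacy-enhancing techniques. Its simplest building block is the Laplace mechanism: add noise scaled to a query's sensitivity before releasing a statistic. The sketch below shows this for a counting query (sensitivity 1); the function name and interface are illustrative, not from the framework.

```python
import random

def laplace_count(true_count, epsilon):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    Adding or removing one individual changes a count by at most 1, so
    noise drawn from Laplace(scale = 1/epsilon) gives epsilon-differential
    privacy for this single query. Smaller epsilon -> more noise.
    """
    scale = 1.0 / epsilon
    # Difference of two iid exponentials with mean `scale` is Laplace(0, scale).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise
```

Note that repeated queries consume privacy budget; real deployments track cumulative epsilon across all releases.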
MEASURE 2.3 AI system performance or assurance criteria are measured qualitatively or quantitatively and …
MS-2.3-001 Consider baseline model performance on suites of benchmarks when selecting a model for fine-tuning
MS-2.3-002 Evaluate claims of model capabilities using empirically validated methods
MS-2.3-003 Share results of pre-deployment testing with relevant GAI Actors, such as those with system …
MS-2.3-004 Utilize a purpose-built testing environment such as NIST Dioptra to empirically evaluate GAI …
MEASURE 2.5 The AI system to be deployed is demonstrated to be valid and reliable
MS-2.5-001 Avoid extrapolating GAI system performance or capabilities from narrow, non-systematic, and …
MS-2.5-002 Document the extent to which human domain knowledge is employed to improve GAI system performance …
MS-2.5-003 Review and verify sources and citations in GAI system outputs during pre-deployment risk …
MS-2.5-004 Track and document instances of anthropomorphization (e.g., human images, mentions of human …
MS-2.5-005 Verify GAI system training data and TEVV data provenance, and that fine-tuning or …
MS-2.5-006 Regularly review security and safety guardrails, especially if the GAI system is being operated in …
MEASURE 2.6 The AI system is evaluated regularly for safety risks -- as identified in the MAP function …
MS-2.6-001 Assess adverse impacts, including health and wellbeing impacts for value chain or other AI Actors …
MS-2.6-002 Assess existence or levels of harmful bias, intellectual property infringement, data privacy …
MS-2.6-003 Re-evaluate safety features of fine-tuned models when the negative risk exceeds organizational …
MS-2.6-004 Review GAI system outputs for validity and safety: Review generated code to assess risks that may …
MS-2.6-005 Verify that GAI system architecture can monitor outputs and performance, and handle, recover from …
MS-2.6-006 Verify that systems properly handle queries that may give rise to inappropriate, malicious, or …
MS-2.6-007 Regularly evaluate GAI system vulnerabilities to possible circumvention of safety measures
MEASURE 2.7 AI system security and resilience -- as identified in the MAP function -- are evaluated and …
MS-2.7-001 Apply established security measures to assess likelihood and magnitude of vulnerabilities and …
MS-2.7-002 Benchmark GAI system security and resilience related to content provenance against industry …
MS-2.7-003 Conduct user surveys to gather user satisfaction with the AI-generated content and user …
MS-2.7-004 Identify metrics that reflect the effectiveness of security measures, such as data provenance, the …
MS-2.7-005 Measure reliability of content authentication methods, such as watermarking, cryptographic …
MS-2.7-006 Measure the rate at which recommendations from security checks and incidents are implemented …
MS-2.7-007 Perform AI red-teaming to assess resilience against: Abuse to facilitate attacks on other systems …
MS-2.7-008 Verify fine-tuning does not compromise safety and security controls
MS-2.7-009 Regularly assess and verify that security measures remain effective and have not been compromised
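MS-2.7-005 names cryptographic methods among the content authentication techniques whose reliability should be measured. A minimal example of such a method is an HMAC tag attached to generated content so downstream consumers can verify it is unmodified and came from the holder of the key. The key value below is a placeholder; everything else uses only the Python standard library's `hmac` and `hashlib` modules.

```python
import hmac
import hashlib

SIGNING_KEY = b"example-secret-key"  # placeholder; use a KMS-managed key in practice

def sign_content(content: bytes) -> str:
    """Attach an HMAC-SHA256 tag so consumers can verify origin and integrity."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Constant-time check that content was signed with our key and unmodified."""
    return hmac.compare_digest(sign_content(content), tag)
```

Unlike watermarking, an HMAC tag is trivially strippable; it proves provenance only when the tag travels with the content, which is one reason MS-2.7-005 asks for reliability of these methods to be measured rather than assumed.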
MEASURE 2.8 Risks associated with transparency and accountability -- as identified in the MAP function -- are …
MS-2.8-001 Compile statistics on actual policy violations, take-down requests, and intellectual property …
MS-2.8-002 Document the instructions given to data annotators or AI red-teamers
MS-2.8-003 Use digital content transparency solutions to enable the documentation of each instance where …
MS-2.8-004 Verify adequacy of GAI system user instructions through user testing
MEASURE 2.9 The AI model is explained, validated, and documented, and AI system output is interpreted within …
MS-2.9-001 Apply and document ML explanation results such as: Analysis of embeddings, Counterfactual prompts …
MS-2.9-002 Document GAI model details including: Proposed use and organizational value; Assumptions and …
MEASURE 2.10 Privacy risk of the AI system -- as identified in the MAP function -- is examined and documented
MS-2.10-001 Conduct AI red-teaming to assess issues such as: Outputting of training data samples, and …
MS-2.10-002 Engage directly with end-users and other stakeholders to understand their expectations and …
MS-2.10-003 Verify deduplication of GAI training data samples, particularly regarding synthetic data
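MS-2.10-003 asks for verification that training data has been deduplicated, since duplicated samples increase memorization and privacy risk. The sketch below shows the basic exact-match approach via content hashing; the normalization step (strip plus lowercase) is an assumption, and paraphrased near-duplicates would need fuzzier methods such as MinHash.

```python
import hashlib

def deduplicate(samples):
    """Drop exact-duplicate training samples via content hashing.

    Hashing normalized text catches verbatim repeats; near-duplicate
    detection (e.g., MinHash) is needed for paraphrased copies.
    """
    seen = set()
    unique = []
    for text in samples:
        digest = hashlib.sha256(text.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique
```

Keeping the first occurrence preserves the original casing while still removing repeats, and the hash set keeps memory bounded even for large corpora streamed from disk.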
MEASURE 2.11 Fairness and bias -- as identified in the MAP function -- are evaluated and results are documented
MS-2.11-001 Apply use-case appropriate benchmarks (e.g., Bias Benchmark Questions, Real Hateful or Harmful …
MS-2.11-002 Conduct fairness assessments to measure systemic bias
MS-2.11-003 Identify the classes of individuals, groups, or environmental ecosystems which might be impacted …
MS-2.11-004 Review, document, and measure sources of bias in GAI training and TEVV data: Differences in …
MS-2.11-005 Assess the proportion of synthetic to non-synthetic training data and verify training data is not …
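The fairness assessments named in MS-2.11-002 are typically operationalized as group-level metrics. One common choice, shown here as an illustration rather than a framework requirement, is the demographic parity gap: the largest difference in positive-outcome rates between identified groups.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """Max difference in positive-outcome rate between groups.

    `outcomes` is a list of (group_label, got_positive_outcome) pairs.
    A gap near 0 indicates parity on this one metric; it is not, by
    itself, evidence the system is fair overall.
    """
    pos = defaultdict(int)
    total = defaultdict(int)
    for group, positive in outcomes:
        total[group] += 1
        pos[group] += int(positive)
    rates = [pos[g] / total[g] for g in total]
    return max(rates) - min(rates)
```

Demographic parity is only one of several mutually incompatible fairness criteria (equalized odds and calibration are others), which is why MS-2.11-003 asks first for identifying which groups might be impacted before selecting a metric.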
MEASURE 2.12 Environmental impact and sustainability of AI model training and management activities -- as …
MS-2.12-001 Assess safety to physical environments when deploying GAI systems
MS-2.12-002 Document anticipated environmental impacts of model development, maintenance, and deployment in …
MS-2.12-003 Measure or estimate environmental impacts (e.g., energy and water consumption) for training, fine-tuning …
MS-2.12-004 Verify effectiveness of carbon capture or offset programs for GAI training and applications, and …
MEASURE 2.13 Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and …
MS-2.13-001 Create measurement error models for pre-deployment metrics to demonstrate construct validity for …