NIST AI 600-1 v1.0
Artificial Intelligence Risk Management
Framework data extracted from the Secure Controls Framework (SCF) v2025.4 Set Theory Relationship Mapping (STRM) files, licensed under CC BY-ND 4.0. Attribution required per license terms.
261 requirements in total.
GOVERN-1 — Govern 1: Legal and Regulatory Compliance (27 requirements)
GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented
GV-1.1-001: Align GAI development and use with applicable laws and regulations, including those related to
GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes
GV-1.2-001: Establish transparency policies and processes for documenting the origin and history of training
GV-1.2-002: Establish policies to evaluate risk-relevant capabilities of GAI and robustness of safety
GOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management
GV-1.3-001: Consider the following factors when updating or defining risk tiers for GAI: Abuses and impacts to
GV-1.3-002: Establish minimum thresholds for performance or assurance criteria and review as part of
GV-1.3-003: Establish a test plan and response policy, before developing highly capable models, to
GV-1.3-004: Obtain input from stakeholder communities to identify unacceptable use, in accordance with
GV-1.3-005: Maintain an updated hierarchy of identified and expected GAI risks connected to contexts of GAI
GV-1.3-006: Reevaluate organizational risk tolerances to account for unacceptable negative risk (such as where
GV-1.3-007: Devise a plan to halt development or deployment of a GAI system that poses unacceptable negative
GOVERN 1.4: The risk management process and its outcomes are established through transparent policies
GV-1.4-001: Establish policies and mechanisms to prevent GAI systems from generating CSAM, NCII or content
GV-1.4-002: Establish transparent acceptable use policies for GAI that address illegal use or applications of
GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are
GV-1.5-001: Define organizational responsibilities for periodic review of content provenance and incident
GV-1.5-002: Establish organizational policies and procedures for after action reviews of GAI system incident
GV-1.5-003: Maintain a document retention policy to keep history for test, evaluation, validation, and
GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk
GV-1.6-001: Enumerate organizational GAI systems for incorporation into AI system inventory and adjust AI
GV-1.6-002: Define any inventory exemptions in organizational policies for GAI systems embedded into
GV-1.6-003: In addition to general model, governance, and risk information, consider the following items in
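The inventory mechanism in GOVERN 1.6 lends itself to a structured record per GAI system. The sketch below is a minimal Python illustration; the field names are assumptions chosen for this example, not fields mandated by AI 600-1 or the SCF.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record; every field name here is illustrative.
@dataclass
class GaiSystemRecord:
    system_id: str
    owner: str
    risk_tier: str                      # per organizational tiering, e.g. "high"/"low"
    data_provenance: list = field(default_factory=list)        # known data sources
    third_party_components: list = field(default_factory=list)  # GOVERN 6 linkage
    decommission_plan: bool = False     # GOVERN 1.7: deactivation protocol documented?

def inventory_gaps(records):
    """Return IDs of inventoried systems missing a documented decommission plan."""
    return [r.system_id for r in records if not r.decommission_plan]
```

A periodic review could run `inventory_gaps` over the full inventory to surface systems that fail the GOVERN 1.7 deactivation-protocol check.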
GOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in
GV-1.7-001: Protocols are put in place to ensure GAI systems are able to be deactivated when necessary
GV-1.7-002: Consider the following factors when decommissioning GAI systems: Data retention requirements; Data
GOVERN-2 — Govern 2: Roles, Responsibilities, and Communication (6 requirements)
GOVERN 2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing
GV-2.1-001: Establish organizational roles, policies, and procedures for communicating GAI incidents and
GV-2.1-002: Establish procedures to engage teams for GAI system incident response with diverse composition and
GV-2.1-003: Establish processes to verify the AI Actors conducting GAI incident response tasks demonstrate and
GV-2.1-004: When systems may raise national security risks, involve national security professionals in
GV-2.1-005: Create mechanisms to provide protections for whistleblowers who report, based on reasonable
GOVERN-3 — Govern 3: Human-AI Configuration and Oversight (6 requirements)
GOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for
GV-3.2-001: Policies are in place to bolster oversight of GAI systems with independent evaluations or
GV-3.2-002: Consider adjustment of organizational roles and components across lifecycle stages of large or
GV-3.2-003: Define acceptable use policies for GAI interfaces, modalities, and human-AI configurations (i.e.
GV-3.2-004: Establish policies for user feedback mechanisms for GAI systems which include thorough
GV-3.2-005: Engage in threat modeling to anticipate potential risks from GAI systems
GOVERN-4 — Govern 4: Organizational Practices and Culture (12 requirements)
GOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first
GV-4.1-001: Establish policies and procedures that address continual improvement processes for GAI risk
GV-4.1-002: Establish policies, procedures, and processes detailing risk measurement in context of use with
GV-4.1-003: Establish policies, procedures, and processes for oversight functions (e.g., senior leadership
GOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design
GV-4.2-001: Establish terms of use and terms of service for GAI systems
GV-4.2-002: Include relevant AI Actors in the GAI system risk identification process
GV-4.2-003: Verify that downstream GAI system impacts (such as the use of third-party plugins) are included in
GOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and
GV-4.3-001: Establish policies for measuring the effectiveness of employed content provenance methodologies
GV-4.3-002: Establish organizational practices to identify the minimum set of criteria necessary for GAI
GV-4.3-003: Verify information sharing and feedback mechanisms among individuals and organizations regarding
GOVERN-5 — Govern 5: External Stakeholder Engagement (3 requirements)
GOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate
GV-5.1-001: Allocate time and resources for outreach, feedback, and recourse processes in GAI system
GV-5.1-002: Document interactions with GAI systems to users prior to interactive activities, particularly in
GOVERN-6 — Govern 6: Third-Party Risk Management (19 requirements)
GOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities
GV-6.1-001: Categorize different types of GAI content with associated third-party rights (e.g., copyright
GV-6.1-002: Conduct joint educational activities and events in collaboration with third parties to promote
GV-6.1-003: Develop and validate approaches for measuring the success of content provenance management efforts
GV-6.1-004: Draft and maintain well-defined contracts and service level agreements (SLAs) that specify content
GV-6.1-005: Implement a use-case based supplier risk assessment framework to evaluate and monitor third-party
GV-6.1-006: Include clauses in contracts which allow an organization to evaluate third-party GAI processes and
GV-6.1-007: Inventory all third-party entities with access to organizational content and establish approved
GV-6.1-008: Maintain records of changes to content made by third parties to promote content provenance
GV-6.1-009: Update and integrate due diligence processes for GAI acquisition and procurement vendor
GV-6.1-010: Update GAI acceptable use policies to address proprietary and open-source GAI technologies and
GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI
GV-6.2-001: Document GAI risks associated with system value chain to identify over-reliance on third-party
GV-6.2-002: Document incidents involving third-party GAI data and systems, including open-data and
GV-6.2-003: Establish incident response plans for third-party GAI technologies: Align incident response plans
GV-6.2-004: Establish policies and procedures for continuous monitoring of third-party GAI systems in
GV-6.2-005: Establish policies and procedures that address GAI data redundancy, including model weights and
GV-6.2-006: Establish policies and procedures to test and manage risks related to rollover and fallback
GV-6.2-007: Review vendor contracts and avoid arbitrary or capricious termination of critical GAI technologies
MANAGE-1 — Manage 1: Risk Prioritization and Response (3 requirements)
MANAGE 1.3: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed
MG-1.3-001: Document trade-offs, decision processes, and relevant measurement and feedback results for risks
MG-1.3-002: Monitor the robustness and effectiveness of risk controls and mitigation plans (e.g., via
MANAGE-2 — Manage 2: Sustain and Monitor Deployed Systems (17 requirements)
MANAGE 2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems
MG-2.2-001: Compare GAI system outputs against pre-defined organization risk tolerance, guidelines, and
MG-2.2-002: Document training data sources to trace the origin and provenance of AI-generated content
MG-2.2-003: Evaluate feedback loops between GAI system content provenance and human reviewers, and update
MG-2.2-004: Evaluate GAI content and data for representational biases and employ techniques such as
MG-2.2-005: Engage in due diligence to analyze GAI output for harmful content, potential misinformation, and
MG-2.2-006: Use feedback from internal and external AI Actors, users, individuals, and communities, to assess
MG-2.2-007: Use real-time auditing tools where they can be demonstrated to aid in the tracking and validation
MG-2.2-008: Use structured feedback mechanisms to solicit and capture user input about AI-generated content
MG-2.2-009: Consider opportunities to responsibly use synthetic data and other privacy-enhancing techniques in
MANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is
MG-2.3-001: Develop and update GAI system incident response and recovery plans and procedures to address the
MANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to
MG-2.4-001: Establish and maintain communication plans to inform AI stakeholders as part of the deactivation
MG-2.4-002: Establish and maintain procedures for escalating GAI system incidents to the organizational risk
MG-2.4-003: Establish and maintain procedures for the remediation of issues which trigger incident response
MG-2.4-004: Establish and regularly review specific criteria that warrant the deactivation of GAI systems in
MANAGE-3 — Manage 3: Third-Party Resource Management (16 requirements)
MANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are
MG-3.1-001: Apply organizational risk tolerances and controls (e.g., acquisition and procurement processes
MG-3.1-002: Test GAI system value chain risks (e.g., data poisoning, malware, other software and hardware
MG-3.1-003: Re-assess model risks after fine-tuning or retrieval-augmented generation implementation and for
MG-3.1-004: Take reasonable measures to review training data for CBRN information, and intellectual property
MG-3.1-005: Review various transparency artifacts (e.g., system cards and model cards) for third-party models
MANAGE 3.2: Pre-trained models which are used for development are monitored as part of AI system regular
MG-3.2-001: Apply explainable AI (XAI) techniques (e.g., analysis of embeddings, model
MG-3.2-002: Document how pre-trained models have been adapted (e.g., fine-tuned, or retrieval-augmented
MG-3.2-003: Document sources and types of training data and their origins, potential biases present in the
MG-3.2-004: Evaluate user reported problematic content and integrate feedback into system updates
MG-3.2-005: Implement content filters to prevent the generation of inappropriate, harmful, false, illegal, or
MG-3.2-006: Implement real-time monitoring processes for analyzing generated content performance and
MG-3.2-007: Leverage feedback and recommendations from organizational boards or committees related to the
MG-3.2-008: Use human moderation systems where appropriate to review generated content in accordance with
MG-3.2-009: Use organizational risk tolerance to evaluate acceptable risks and performance metrics and
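The content filters named in MG-3.2-005 share a common control flow: score the generated output, compare against policy, then block or pass. The sketch below is a deliberately minimal keyword-based illustration, not the framework's prescribed method; production filters typically use trained classifiers, and the blocklist term here is a placeholder.

```python
# Placeholder blocklist; a real deployment would load policy-managed terms
# or call a trained safety classifier instead.
BLOCKLIST = {"example-banned-term"}

def filter_output(text, blocklist=BLOCKLIST):
    """Return (allowed, matched_terms) for a generated string.

    allowed is False when any blocklisted term appears in the
    lowercased output; matched_terms lists the hits for audit logs.
    """
    hits = sorted(t for t in blocklist if t in text.lower())
    return (len(hits) == 0, hits)
```

Logging the matched terms alongside the block decision supports the incident-tracking and after-action items elsewhere in MANAGE.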
MANAGE-4 — Manage 4: Post-Deployment Monitoring and Improvement (16 requirements)
MANAGE 4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and
MG-4.1-001: Collaborate with external researchers, industry experts, and community representatives to maintain
MG-4.1-002: Establish, maintain, and evaluate effectiveness of organizational processes and procedures for
MG-4.1-003: Evaluate the use of sentiment analysis to gauge user sentiment regarding GAI content performance
MG-4.1-004: Implement active learning techniques to identify instances where the model fails or produces
MG-4.1-005: Share transparency reports with internal and external stakeholders that detail steps taken to
MG-4.1-006: Track dataset modifications for provenance by monitoring data deletions, rectification requests
MG-4.1-007: Verify that AI Actors responsible for monitoring reported issues can effectively evaluate GAI
MANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include
MG-4.2-001: Conduct regular monitoring of GAI systems and publish reports detailing the performance, feedback
MG-4.2-002: Practice and follow incident response plans for addressing the generation of inappropriate or
MG-4.2-003: Use visualizations or other methods to represent GAI model behavior to ease non-technical
MANAGE 4.3: Incidents and errors are communicated to relevant AI Actors, including affected communities
MG-4.3-001: Conduct after-action assessments for GAI system incidents to verify incident response and recovery
MG-4.3-002: Establish and maintain policies and procedures to record and track GAI system reported errors
MG-4.3-003: Report GAI incidents in compliance with legal and regulatory requirements (e.g., HIPAA breach
MAP-1 — Map 1: Context and Intended Use (8 requirements)
MAP 1.1: Intended purposes, potentially beneficial uses, context specific laws, norms and expectations, and
MP-1.1-001: When identifying intended purposes, consider factors such as internal vs. external use, narrow vs.
MP-1.1-002: Determine and document the expected and acceptable GAI system context of use in collaboration with
MP-1.1-003: Document risk measurement plans to address identified risks
MP-1.1-004: Identify and document foreseeable illegal uses or applications of the GAI system that surpass
MAP 1.2: Interdisciplinary AI Actors, competencies, skills, and capacities for establishing context reflect
MP-1.2-001: Establish and empower interdisciplinary teams that reflect a wide range of capabilities
MP-1.2-002: Verify that data or benchmarks used in risk measurement, and users, participants, or subjects
MAP-2 — Map 2: Tasks, Methods, and Scientific Integrity (12 requirements)
MAP 2.1: The specific tasks and methods used to implement the tasks that the AI system will support are
MP-2.1-001: Establish known assumptions and practices for determining data origin and content lineage, for
MP-2.1-002: Institute test and evaluation for data and content flows within the GAI system, including but not
MAP 2.2: Information about the AI system's knowledge limits and how system output may be utilized and
MP-2.2-001: Identify and document how the system relies on upstream data sources, including for content
MP-2.2-002: Observe and analyze how the GAI system interacts with external networks, and identify any
MAP 2.3: Scientific integrity and TEVV considerations are identified and documented, including those
MP-2.3-001: Assess the accuracy, quality, reliability, and authenticity of GAI output by comparing it to a set
MP-2.3-002: Review and document accuracy, representativeness, relevance, suitability of data used at different
MP-2.3-003: Deploy and document fact-checking techniques to verify the accuracy and veracity of information
MP-2.3-004: Develop and implement testing techniques to identify GAI produced content (e.g., synthetic media)
MP-2.3-005: Implement plans for GAI systems to undergo regular adversarial testing to identify vulnerabilities
MAP-3 — Map 3: Operator and Practitioner Proficiency (7 requirements)
MAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness
MP-3.4-001: Evaluate whether GAI operators and end-users can accurately understand content lineage and origin
MP-3.4-002: Adapt existing training programs to include modules on digital content transparency
MP-3.4-003: Develop certification programs that test proficiency in managing GAI risks and interpreting
MP-3.4-004: Delineate human proficiency tests from tests of GAI capabilities
MP-3.4-005: Implement systems to continually monitor and track the outcomes of human-GAI configurations for
MP-3.4-006: Involve the end-users, practitioners, and operators in GAI system in prototyping and testing
MAP-4 — Map 4: Technology and Legal Risk Mapping (11 requirements)
MAP 4.1: Approaches for mapping AI technology and legal risks of its components -- including the use of
MP-4.1-001: Conduct periodic monitoring of AI-generated content for privacy risks; address any possible
MP-4.1-002: Implement processes for responding to potential intellectual property infringement claims or other
MP-4.1-003: Connect new GAI policies, procedures, and processes to existing model, data, software development
MP-4.1-004: Document training data curation policies, to the extent possible and according to applicable laws
MP-4.1-005: Establish policies for collection, retention, and minimum quality of data, in consideration of the
MP-4.1-006: Implement policies and practices defining how third-party intellectual property and training data
MP-4.1-007: Re-evaluate models that were fine-tuned or enhanced on top of third-party models
MP-4.1-008: Re-evaluate risks when adapting GAI models to new domains
MP-4.1-009: Leverage approaches to detect the presence of PII or sensitive data in generated output text
MP-4.1-010: Conduct appropriate diligence on training data use to assess intellectual property, and privacy
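One common approach to MP-4.1-009's PII detection in generated output is pattern matching over the text before it is released. The sketch below is an illustrative minimum, assuming just two hypothetical categories; production systems use vetted PII detectors with far broader coverage.

```python
import re

# Illustrative patterns only; real deployments cover many more categories
# (phone numbers, addresses, credentials) with validated detectors.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text):
    """Return the sorted PII categories whose patterns match the output text."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(text))
```

A non-empty result would typically route the output to redaction or block it, feeding the privacy-risk monitoring called for in MP-4.1-001.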
MAP-5 — Map 5: Impact Identification and Documentation (10 requirements)
MAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based
MP-5.1-001: Apply TEVV practices for content provenance (e.g., probing a system's synthetic data generation
MP-5.1-002: Identify potential content provenance harms of GAI, such as misinformation or disinformation
MP-5.1-003: Consider disclosing use of GAI to end users in relevant contexts, while considering the objective
MP-5.1-004: Prioritize GAI structured public feedback processes based on risk assessment estimates
MP-5.1-005: Conduct adversarial role-playing exercises, GAI red-teaming, or chaos testing to identify
MP-5.1-006: Profile threats and negative impacts arising from GAI systems interacting with, manipulating, or
MAP 5.2: Practices and personnel for supporting regular engagement with relevant AI Actors and integrating
MP-5.2-001: Determine context-based measures to identify if new impacts are present due to the GAI system
MP-5.2-002: Plan regular engagements with AI Actors responsible for inputs to GAI systems, including
MEASURE-1 — Measure 1: Risk Measurement Approaches and Metrics (14 requirements)
MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected
MS-1.1-001: Employ methods to trace the origin and modifications of digital content
MS-1.1-002: Integrate tools designed to analyze content provenance and detect data anomalies, verify the
MS-1.1-003: Disaggregate evaluation metrics by demographic factors to identify any discrepancies in how
MS-1.1-004: Develop a suite of metrics to evaluate structured public feedback exercises informed by
MS-1.1-005: Evaluate novel methods and technologies for the measurement of GAI-related risks including in
MS-1.1-006: Implement continuous monitoring of GAI system impacts to identify whether GAI outputs are
MS-1.1-007: Evaluate the quality and integrity of data used in training and the provenance of AI-generated
MS-1.1-008: Define use cases, contexts of use, capabilities, and negative impacts where structured human
MS-1.1-009: Track and document risks or opportunities related to all GAI risks that cannot be measured
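The disaggregation in MS-1.1-003 amounts to computing the same metric separately per demographic group and comparing the results. A minimal sketch, assuming accuracy as the metric and a generic `group` label per example (both assumptions for illustration):

```python
from collections import defaultdict

def disaggregated_accuracy(examples):
    """Compute per-group accuracy.

    examples: iterable of (group, predicted, actual) triples.
    Returns {group: accuracy}, exposing discrepancies across groups.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, actual in examples:
        total[group] += 1
        correct[group] += int(pred == actual)
    return {g: correct[g] / total[g] for g in total}
```

Large gaps between the per-group values are the discrepancies the subcategory asks reviewers to identify and investigate.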
MEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent
MS-1.3-001: Define relevant groups of interest (e.g., demographic groups, subject matter experts, experience
MS-1.3-002: Engage in internal and external evaluations, GAI red-teaming, impact assessments, or other
MS-1.3-003: Verify those conducting structured human feedback exercises are not directly involved in system
MEASURE-2 — Measure 2: AI System Performance and Trustworthiness (60 requirements)
MEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject
MS-2.2-001: Assess and manage statistical biases related to GAI content provenance through techniques such as
MS-2.2-002: Document how content provenance data is tracked and how that data interacts with privacy and
MS-2.2-003: Provide human subjects with options to withdraw participation or revoke their consent for present
MS-2.2-004: Use techniques such as anonymization, differential privacy or other privacy-enhancing
MEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and
MS-2.3-001: Consider baseline model performance on suites of benchmarks when selecting a model for fine-tuning
MS-2.3-002: Evaluate claims of model capabilities using empirically validated methods
MS-2.3-003: Share results of pre-deployment testing with relevant GAI Actors, such as those with system
MS-2.3-004: Utilize a purpose-built testing environment such as NIST Dioptra to empirically evaluate GAI
MEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable
MS-2.5-001: Avoid extrapolating GAI system performance or capabilities from narrow, non-systematic, and
MS-2.5-002: Document the extent to which human domain knowledge is employed to improve GAI system performance
MS-2.5-003: Review and verify sources and citations in GAI system outputs during pre-deployment risk
MS-2.5-004: Track and document instances of anthropomorphization (e.g., human images, mentions of human
MS-2.5-005: Verify GAI system training data and TEVV data provenance, and that fine-tuning or
MS-2.5-006: Regularly review security and safety guardrails, especially if the GAI system is being operated in
MEASURE 2.6: The AI system is evaluated regularly for safety risks -- as identified in the MAP function
MS-2.6-001: Assess adverse impacts, including health and wellbeing impacts for value chain or other AI Actors
MS-2.6-002: Assess existence or levels of harmful bias, intellectual property infringement, data privacy
MS-2.6-003: Re-evaluate safety features of fine-tuned models when the negative risk exceeds organizational
MS-2.6-004: Review GAI system outputs for validity and safety: Review generated code to assess risks that may
MS-2.6-005: Verify that GAI system architecture can monitor outputs and performance, and handle, recover from
MS-2.6-006: Verify that systems properly handle queries that may give rise to inappropriate, malicious, or
MS-2.6-007: Regularly evaluate GAI system vulnerabilities to possible circumvention of safety measures
MEASURE 2.7: AI system security and resilience -- as identified in the MAP function -- are evaluated and
MS-2.7-001: Apply established security measures to assess likelihood and magnitude of vulnerabilities and
MS-2.7-002: Benchmark GAI system security and resilience related to content provenance against industry
MS-2.7-003: Conduct user surveys to gather user satisfaction with the AI-generated content and user
MS-2.7-004: Identify metrics that reflect the effectiveness of security measures, such as data provenance, the
MS-2.7-005: Measure reliability of content authentication methods, such as watermarking, cryptographic
MS-2.7-006: Measure the rate at which recommendations from security checks and incidents are implemented
MS-2.7-007: Perform AI red-teaming to assess resilience against: Abuse to facilitate attacks on other systems
MS-2.7-008: Verify fine-tuning does not compromise safety and security controls
MS-2.7-009: Regularly assess and verify that security measures remain effective and have not been compromised
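Measuring the reliability of content authentication methods (MS-2.7-005) usually reduces to standard detector statistics: how often watermarked content is correctly flagged, and how often clean content is falsely flagged. A minimal sketch, assuming a labeled evaluation set of (ground truth, detector verdict) pairs:

```python
def detector_reliability(results):
    """Summarize a content-authentication detector over labeled trials.

    results: iterable of (is_watermarked, detector_flagged) booleans.
    Returns (true_positive_rate, false_positive_rate).
    """
    tp = fp = pos = neg = 0
    for truth, flagged in results:
        if truth:
            pos += 1
            tp += int(flagged)
        else:
            neg += 1
            fp += int(flagged)
    return (tp / pos if pos else 0.0, fp / neg if neg else 0.0)
```

Tracking these two rates over time, and after transformations such as paraphrasing or re-encoding, gives the benchmark data MS-2.7-002 calls for.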
MEASURE 2.8: Risks associated with transparency and accountability -- as identified in the MAP function -- are
MS-2.8-001: Compile statistics on actual policy violations, take-down requests, and intellectual property
MS-2.8-002: Document the instructions given to data annotators or AI red-teamers
MS-2.8-003: Use digital content transparency solutions to enable the documentation of each instance where
MS-2.8-004: Verify adequacy of GAI system user instructions through user testing
MEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within
MS-2.9-001: Apply and document ML explanation results such as: Analysis of embeddings, Counterfactual prompts
MS-2.9-002: Document GAI model details including: Proposed use and organizational value; Assumptions and
MEASURE 2.10: Privacy risk of the AI system -- as identified in the MAP function -- is examined and documented
MS-2.10-001: Conduct AI red-teaming to assess issues such as: Outputting of training data samples, and
MS-2.10-002: Engage directly with end-users and other stakeholders to understand their expectations and
MS-2.10-003: Verify deduplication of GAI training data samples, particularly regarding synthetic data
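The deduplication check in MS-2.10-003 matters for privacy because repeated training samples are more likely to be memorized and emitted verbatim. An exact-match pass can be sketched with content hashing; this is an illustrative minimum, and near-duplicate detection (e.g., MinHash over shingles) would be layered on top in practice:

```python
import hashlib

def deduplicate(samples):
    """Drop exact duplicates (after whitespace/case normalization),
    keeping the first occurrence of each sample."""
    seen, unique = set(), []
    for s in samples:
        digest = hashlib.sha256(s.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(s)
    return unique
```

Comparing `len(samples)` to `len(deduplicate(samples))` gives a duplication rate that can be reported as part of the privacy-risk documentation.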
MEASURE 2.11: Fairness and bias -- as identified in the MAP function -- are evaluated and results are documented
MS-2.11-001: Apply use-case appropriate benchmarks (e.g., Bias Benchmark Questions, Real Hateful or Harmful
MS-2.11-002: Conduct fairness assessments to measure systemic bias
MS-2.11-003: Identify the classes of individuals, groups, or environmental ecosystems which might be impacted
MS-2.11-004: Review, document, and measure sources of bias in GAI training and TEVV data: Differences in
MS-2.11-005: Assess the proportion of synthetic to non-synthetic training data and verify training data is not
MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities -- as
MS-2.12-001: Assess safety to physical environments when deploying GAI systems
MS-2.12-002: Document anticipated environmental impacts of model development, maintenance, and deployment in
MS-2.12-003: Measure or estimate environmental impacts (e.g., energy and water consumption) for training, fine
MS-2.12-004: Verify effectiveness of carbon capture or offset programs for GAI training and applications, and
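When direct measurement is unavailable, MS-2.12-003's estimate is often approximated from accelerator hours. The sketch below shows the common GPU-hours times average draw times datacenter overhead (PUE) times grid carbon intensity formula; all default values are placeholder assumptions and should be replaced with measured figures for the actual hardware, facility, and grid.

```python
def training_emissions_kg(gpu_hours, avg_power_kw=0.4, pue=1.2,
                          grid_kg_per_kwh=0.4):
    """Rough CO2e estimate (kg) for a training or fine-tuning run.

    gpu_hours:        total accelerator hours consumed
    avg_power_kw:     average per-GPU draw in kW (placeholder default)
    pue:              datacenter power usage effectiveness (placeholder)
    grid_kg_per_kwh:  grid carbon intensity in kg CO2e/kWh (placeholder)
    """
    energy_kwh = gpu_hours * avg_power_kw * pue
    return energy_kwh * grid_kg_per_kwh
```

Reporting the intermediate `energy_kwh` alongside the CO2e figure also covers the energy-consumption documentation in MS-2.12-002.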
MEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and
MS-2.13-001: Create measurement error models for pre-deployment metrics to demonstrate construct validity for
MEASURE-3 — Measure 3: Risk Tracking and Feedback (8 requirements)
MEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using
MS-3.2-001: Establish processes for identifying emergent GAI system risks including consulting with external
MEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system
MS-3.3-001: Conduct impact assessments on how AI-generated content might affect different social, economic
MS-3.3-002: Conduct studies to understand how end users perceive and interact with GAI content and
MS-3.3-003: Evaluate potential biases and stereotypes that could emerge from the AI-generated content using
MS-3.3-004: Provide input for training materials about the capabilities and limitations of GAI systems related
MS-3.3-005: Record and integrate structured feedback about content provenance from operators, users, and
MEASURE-4 — Measure 4: Deployment and Lifecycle Measurement (6 requirements)
MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI
MS-4.2-001: Conduct adversarial testing at a regular cadence to map and measure GAI risks, including tests to
MS-4.2-002: Evaluate GAI system performance in real-world scenarios to observe its behavior in practical
MS-4.2-003: Implement interpretability and explainability methods to evaluate GAI system decisions and verify
MS-4.2-004: Monitor and document instances where human operators or other systems override the GAI's decisions
MS-4.2-005: Verify and document the incorporation of results of structured public feedback exercises into