
GOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate and use, and communicate about the impacts more broadly.

>Control Description

Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate and use, and communicate about the impacts more broadly.

>About

Impact assessments are one approach for driving responsible technology development practices. Within a specific use case, these assessments can provide a high-level structure for organizations to frame the risks of a given algorithm or deployment. Impact assessments can also serve as a mechanism for organizations to articulate risks and generate documentation for management and oversight activities when harms do arise.

Impact assessments may:

  • be applied at the beginning of a process, and then iteratively and regularly, since goals and outcomes can evolve over time;
  • include perspectives from AI actors, including operators, users, and potentially impacted communities (including historically marginalized communities, those with disabilities, and individuals affected by the digital divide);
  • assist in “go/no-go” decisions for an AI system;
  • consider conflicts of interest, or undue influence, related to the organizational team being assessed.

See the MAP function playbook guidance for more information relating to impact assessments.
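The properties above can be made concrete in an assessment record that is revisited each iteration. The sketch below is purely illustrative: the class, field names, and decision rule are assumptions for demonstration, not a schema prescribed by the framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    GO = "go"
    NO_GO = "no-go"
    CONDITIONAL = "conditional"  # proceed only after further review


@dataclass
class ImpactAssessment:
    """One iteration of a (hypothetical) impact assessment record."""
    system_name: str
    identified_risks: list[str]
    consulted_actors: list[str]  # operators, users, impacted communities
    conflicts_of_interest: list[str] = field(default_factory=list)

    def decide(self, risk_threshold: int = 3) -> Decision:
        # Escalate if any conflict of interest involving the assessed
        # team is recorded; block if unresolved risks exceed a threshold.
        if self.conflicts_of_interest:
            return Decision.CONDITIONAL
        if len(self.identified_risks) > risk_threshold:
            return Decision.NO_GO
        return Decision.GO
```

A real go/no-go gate would weigh risk severity and mitigation status rather than a simple count; the point is that each iteration of the assessment yields a documented, reviewable decision.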

>Suggested Actions

  • Establish impact assessment policies and processes for AI systems used by the organization.
  • Align organizational impact assessment activities with relevant regulatory or legal requirements.
  • Verify that impact assessment activities are appropriate to the potential negative impacts of a system and to how quickly the system changes, and that assessments are repeated on a regular basis.
  • Utilize impact assessments to inform broader evaluations of AI system risk.
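One way to operationalize the "regular basis" action above is to tie reassessment cadence to how quickly the system changes. The helper below is a minimal sketch under assumed cadences; the velocity tiers and intervals are illustrative, not values defined by the framework.

```python
from datetime import date, timedelta


def reassessment_due(last_assessed: date,
                     change_velocity: str,
                     today: date) -> bool:
    """Return True if a new impact assessment is due.

    Cadence tightens with how quickly the system changes
    (e.g. frequent retraining or data drift). The intervals
    here are example policy values, not prescribed ones.
    """
    cadence = {
        "high": timedelta(days=90),    # retrained or updated often
        "medium": timedelta(days=180),
        "low": timedelta(days=365),    # static model, stable data
    }[change_velocity]
    return today - last_assessed >= cadence
```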

>Documentation Guidance

Organizations can document the following:

  • How has the entity identified and mitigated potential impacts of bias in the data, including inequitable or discriminatory outcomes?
  • How has the entity documented the AI system’s data provenance, including sources, origins, transformations, augmentations, labels, dependencies, constraints, and metadata?
  • To what extent has the entity clearly defined technical specifications and requirements for the AI system?
  • To what extent has the entity documented and communicated the AI system’s development, testing methodology, metrics, and performance outcomes?
  • To what extent has the entity documented and explained that machine errors may differ from human errors?
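The data-provenance question above lends itself to a datasheet-style record that can be checked for completeness. The field names and the record contents below are hypothetical examples, not a mandated schema.

```python
# A hypothetical datasheet-style provenance record. Every value is
# illustrative; the field list mirrors the documentation guidance
# (sources, transformations, augmentations, labels, dependencies,
# constraints, metadata).
provenance_record = {
    "dataset": "loan_applications_v3",
    "sources": ["internal CRM export", "public census tables"],
    "transformations": ["deduplication", "currency normalization"],
    "augmentations": ["synthetic minority oversampling"],
    "labels": {"target": "default_within_12_months", "labeler": "ops team"},
    "dependencies": ["nightly_ingest ETL job"],
    "constraints": ["exclude records lacking a consent basis"],
    "metadata": {"collected": "2021-2023", "rows": 120_000},
}


def missing_fields(record: dict) -> list[str]:
    """Return the provenance fields from the guidance that are
    absent or empty in a record."""
    required = ["sources", "transformations", "augmentations",
                "labels", "dependencies", "constraints", "metadata"]
    return [f for f in required if not record.get(f)]
```

A completeness check like this could gate release documentation: an empty result means every field the guidance asks about has at least some content recorded.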

AI Transparency Resources

  • GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
  • Datasheets for Datasets.

>References

Dillon Reisman, Jason Schultz, Kate Crawford, Meredith Whittaker, “Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability,” AI Now Institute, 2018.

H.R. 2231, 116th Cong. (2019).

BSA The Software Alliance (2021) Confronting Bias: BSA’s Framework to Build Trust in AI.

Anthony M. Barrett, Dan Hendrycks, Jessica Newman and Brandie Nonnecke. Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks. ArXiv abs/2206.08966 (2022) https://arxiv.org/abs/2206.08966

David Wright, “Making Privacy Impact Assessments More Effective." The Information Society 29, 2013.

Konstantinia Charitoudi and Andrew Blyth. A Socio-Technical Approach to Cyber Risk Management and Impact Assessment. Journal of Information Security 4, 1 (2013), 33-41.

Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, & Jacob Metcalf. 2021. “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest”.

Microsoft. Responsible AI Impact Assessment Template. 2022.

Microsoft. Responsible AI Impact Assessment Guide. 2022.

Microsoft. Foundations of assessing harm. 2022.

Mauritz Kop, “AI Impact Assessment & Code of Conduct,” Futurium, May 2019.

Andrew D. Selbst, “An Institutional View Of Algorithmic Impact Assessments,” Harvard Journal of Law & Technology, vol. 35, no. 1, 2021.

Ada Lovelace Institute. 2022. Algorithmic Impact Assessment: A Case Study in Healthcare. Accessed July 14, 2022.

Kathy Baxter, AI Ethics Maturity Model, Salesforce.

Ravit Dotan, Borhane Blili-Hamelin, Ravi Madhavan, Jeanna Matthews, Joshua Scarpino, & Carol Anderson. (2024). A Flexible Maturity Model for AI Governance Based on the NIST AI Risk Management Framework [Technical Report]. IEEE.

>AI Actors

AI Design
AI Development
AI Deployment
Operation and Monitoring

>Topics

Risk Culture
Governance
Impact Assessment

>Cross-Framework Mappings
