MAP-3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness - as connected to organizational risk tolerance - are examined and documented.

>Control Description

Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness - as connected to organizational risk tolerance - are examined and documented.

>About

Anticipating negative impacts of AI systems is a difficult task. Negative impacts can stem from many factors, such as system non-functionality or use outside of operational limits, and may range from minor annoyance to serious injury, financial losses, or regulatory enforcement actions. AI actors can work with a broad set of stakeholders to improve their capacity for understanding systems’ potential impacts and, subsequently, systems’ risks.

>Suggested Actions

  • Perform context analysis to map potential negative impacts arising from not integrating trustworthiness characteristics. When negative impacts are not direct or obvious, AI actors can engage with stakeholders external to the team that developed or deployed the AI system, and potentially impacted communities, to examine and document:
      • Who could be harmed?
      • What could be harmed?
      • When could harm arise?
      • How could harm arise?
  • Identify and implement procedures for regularly evaluating the qualitative and quantitative costs of internal and external AI system failures. Develop actions to prevent, detect, and/or correct potential risks and related impacts. Regularly evaluate failure costs to inform go/no-go deployment decisions throughout the AI system lifecycle; a minimal sketch of such a review follows this list.
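
As a loose illustration of the actions above, the sketch below shows one way the "who / what / when / how" answers and a recurring failure-cost review might be captured in code. This is a minimal sketch, not anything prescribed by the framework: the class names (HarmRecord, FailureCostReview), their fields, and the tolerance thresholds are all hypothetical stand-ins for whatever an organization's own risk-tolerance policy defines.

```python
"""Hypothetical harm register and failure-cost review (illustrative only)."""
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    MINOR = 1     # e.g. minor annoyance
    MODERATE = 2  # e.g. degraded service, reputational cost
    SEVERE = 3    # e.g. serious injury, regulatory enforcement action


@dataclass
class HarmRecord:
    """One documented answer to: who/what/when/how could harm arise?"""
    who: str                    # affected person, group, or community
    what: str                   # asset or interest that could be harmed
    when: str                   # lifecycle stage or triggering condition
    how: str                    # failure mode producing the harm
    severity: Severity
    monetary_cost: float = 0.0  # estimated cost; 0.0 for purely non-monetary harms


@dataclass
class FailureCostReview:
    """Recurring review of documented harms against a stated risk tolerance."""
    harms: list[HarmRecord] = field(default_factory=list)
    cost_tolerance: float = 100_000.0  # hypothetical organizational threshold
    severity_tolerance: Severity = Severity.MODERATE

    def total_monetary_cost(self) -> float:
        return sum(h.monetary_cost for h in self.harms)

    def go_no_go(self) -> bool:
        """'Go' only if every documented harm sits within tolerance."""
        within_cost = self.total_monetary_cost() <= self.cost_tolerance
        within_severity = all(
            h.severity.value <= self.severity_tolerance.value for h in self.harms
        )
        return within_cost and within_severity


if __name__ == "__main__":
    review = FailureCostReview()
    review.harms.append(HarmRecord(
        who="Loan applicants in underserved communities",
        what="Access to credit",
        when="Post-deployment, under data drift",
        how="Model operates outside its validated operational limits",
        severity=Severity.SEVERE,
        monetary_cost=250_000.0,
    ))
    print("Deployment decision:", "go" if review.go_no_go() else "no-go")
```

Keeping monetary estimates and non-monetary severities in a single record ties the harm-mapping questions directly to the go/no-go check, so the same artifact can be re-evaluated at each lifecycle stage.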

>Documentation Guidance

Organizations can document the following:

  • To what extent does the system/entity consistently measure progress towards stated goals and objectives?
  • To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?
  • Have you documented and explained that machine errors may differ from human errors?

AI Transparency Resources

  • Intel.gov: AI Ethics Framework for Intelligence Community - 2020.
  • GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
  • Assessment List for Trustworthy AI (ALTAI) - The High-Level Expert Group on AI - 2019.

>References

Abagayle Lee Blank. 2019. Computer vision machine learning and future-oriented ethics. Honors Project. Seattle Pacific University (SPU), Seattle, WA.

Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. 2020. Overcoming Failures of Imagination in AI Infused System Development and Deployment. arXiv:2011.13416.

Jeff Patton. 2014. User Story Mapping. O'Reilly, Sebastopol, CA.

Margarita Boenig-Liptsin, Anissa Tanweer, and Ari Edmundson. 2022. Data Science Ethos Lifecycle: Interplay of ethical thinking and data science practice. Journal of Statistics and Data Science Education. DOI: 10.1080/26939169.2022.2089411.

J. Cohen, D. S. Katz, M. Barker, N. Chue Hong, R. Haines, and C. Jay. 2021. The Four Pillars of Research Software Engineering. IEEE Software 38, 1 (Jan.-Feb. 2021), 97-105. DOI: 10.1109/MS.2020.2973362.

National Academies of Sciences, Engineering, and Medicine. 2022. Fostering Responsible Computing Research: Foundations and Practices. Washington, DC: The National Academies Press.

>AI Actors

AI Design
AI Development
Operation and Monitoring
AI Impact Assessment

>Topics

Impact Assessment
Trustworthy Characteristics
Validity and Reliability
Safety
Secure and Resilient
Accountability and Transparency
Explainability and Interpretability
Privacy
Fairness and Bias
