
MEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.

>Control Description

Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.

>About

Risks identified in the Map function may be complex, may emerge over time, or may be difficult to measure. Systematic methods for risk tracking, including novel measurement approaches, can be established as part of regular monitoring and improvement processes.

>Suggested Actions

  • Establish processes for tracking emergent risks that may not be measurable with current approaches. Some processes may include:
    • Recourse mechanisms for faulty AI system outputs.
    • Bug bounties.
    • Human-centered design approaches.
    • User-interaction and experience research.
    • Participatory stakeholder engagement with affected or potentially impacted individuals and communities.
  • Identify AI actors responsible for tracking emergent risks and inventory methods.
  • Determine and document the rate of occurrence and severity level for complex or difficult-to-measure risks (see the sketch after this list) when:
    • Prioritizing new measurement approaches for deployment tasks.
    • Allocating AI system risk management resources.
    • Evaluating AI system improvements.
    • Making go/no-go decisions for subsequent system iterations.
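
Where the actions above call for documenting the rate of occurrence and severity of difficult-to-measure risks, a lightweight register can make the bookkeeping concrete. The sketch below is only one illustration in Python; the `EmergentRisk` and `RiskRegister` classes, the 1–5 severity scale, and the go/no-go threshold are assumptions for this example, not part of the framework.

```python
# Minimal sketch of an emergent-risk register; class names and the severity
# scale are illustrative assumptions, not prescribed by the NIST AI RMF.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date


@dataclass
class EmergentRisk:
    """One difficult-to-measure risk surfaced via bug bounties, user research, etc."""
    risk_id: str
    description: str
    source: str                    # e.g. "bug bounty", "stakeholder engagement"
    owner: str                     # AI actor responsible for tracking this risk
    severity: int                  # assumed ordinal scale: 1 (negligible) to 5 (critical)
    occurrences: list[date] = field(default_factory=list)

    def record_occurrence(self, when: date) -> None:
        """Log one observed occurrence of the risk."""
        self.occurrences.append(when)

    def monthly_rate(self, window_days: int = 90) -> float:
        """Approximate occurrences per month over a trailing window."""
        cutoff = date.today().toordinal() - window_days
        recent = [d for d in self.occurrences if d.toordinal() >= cutoff]
        return 30.0 * len(recent) / window_days


@dataclass
class RiskRegister:
    """Inventory of tracked emergent risks for one AI system."""
    risks: list[EmergentRisk] = field(default_factory=list)

    def prioritized(self) -> list[EmergentRisk]:
        """Order risks by severity, then observed rate, to guide resource allocation."""
        return sorted(self.risks, key=lambda r: (r.severity, r.monthly_rate()), reverse=True)

    def go_no_go(self, max_severity: int = 3) -> bool:
        """Example gate: approve the next iteration only if no risk exceeds the threshold."""
        return all(r.severity <= max_severity for r in self.risks)
```

Sorting by severity and then by observed rate gives a repeatable ordering for allocating risk management resources, and the gate method mirrors a simple go/no-go check before a subsequent system iteration.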

>Documentation Guidance

Organizations can document the following:

  • Who is ultimately responsible for the decisions of the AI and is this person aware of the intended uses and limitations of the analytic?
  • Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?
  • To what extent does the entity communicate its AI strategic goals and objectives to the community of stakeholders?
  • Given the purpose of this AI, what is an appropriate interval for checking whether it is still accurate, unbiased, explainable, etc.? What are the checks for this model?
  • If anyone believes that the AI no longer meets this ethical framework, who will be responsible for receiving the concern and, as appropriate, investigating and remediating the issue? Do they have authority to modify, limit, or stop the use of the AI?
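
One way to make the answers to these questions auditable is to record them in a small, machine-readable form alongside the system inventory. The sketch below is a hypothetical illustration; the `AccountabilityRecord` class, its field names, and the example values are assumptions, not a schema defined by the framework or the cited transparency resources.

```python
# Hypothetical accountability record; the class, field names, and example
# values are assumptions, not a prescribed documentation schema.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class AccountabilityRecord:
    """Captures who is answerable for an AI system and how often it is re-checked."""
    system_name: str
    decision_owner: str                  # ultimately responsible for the AI's decisions
    maintainer: str                      # maintains, re-verifies, monitors, and updates post-deployment
    concern_contact: str                 # receives and investigates ethical-framework concerns
    can_halt_system: bool                # authority to modify, limit, or stop use of the AI
    reverification_interval_days: int    # how often accuracy/bias/explainability checks run
    checks: list[str]                    # named checks performed at each interval


# Example values for a hypothetical deployment.
record = AccountabilityRecord(
    system_name="loan-triage-model",
    decision_owner="Head of Credit Risk",
    maintainer="ML Platform Team",
    concern_contact="AI Ethics Review Board",
    can_halt_system=True,
    reverification_interval_days=90,
    checks=["holdout accuracy", "subgroup bias audit", "explanation coverage"],
)
```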

AI Transparency Resources

  • GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
  • Artificial Intelligence Ethics Framework For The Intelligence Community.

>References

ISO. "ISO 9241-210:2019 Ergonomics of human-system interaction — Part 210: Human-centred design for interactive systems." 2nd ed. ISO Standards, July 2019.

Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V. Weber. "Capability Maturity Model, Version 1.1." IEEE Software 10, no. 4 (1993): 18–27.

Jeff Patton, Peter Economy, Martin Fowler, Alan Cooper, and Marty Cagan. User Story Mapping: Discover the Whole Story, Build the Right Product. O'Reilly, 2014.

Rumman Chowdhury and Jutta Williams. "Introducing Twitter’s first algorithmic bias bounty challenge." Twitter Engineering Blog, July 30, 2021.

HackerOne. "Twitter Algorithmic Bias." HackerOne, August 8, 2021.

Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. "Bug Bounties for Algorithmic Harms?" Algorithmic Justice League, January 2022.

Microsoft. "Community Jury." Microsoft Learn's Azure Application Architecture Guide, 2023.

Margarita Boyarskaya, Alexandra Olteanu, and Kate Crawford. "Overcoming Failures of Imagination in AI Infused System Development and Deployment." arXiv preprint, submitted December 10, 2020.

>AI Actors

TEVV
Domain Experts
AI Impact Assessment
Operation and Monitoring

>Topics

Monitoring
Continual Improvement

>Cross-Framework Mappings
