
MANAGE-2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified.

>Control Description

Procedures are followed to respond to and recover from a previously unknown risk when it is identified.

>About

AI systems, like any technology, can fail, stop functioning, or exhibit unexpected and unusual behavior. They can also be subject to attacks, incidents, or other misuse or abuse whose sources are not always known a priori. Organizations can establish, document, communicate, and maintain treatment procedures to recognize, counter, mitigate, and manage risks that were not previously identified.

>Suggested Actions

  • Establish protocols, resources, and metrics for continual monitoring of AI systems’ performance, trustworthiness, and alignment with contextual norms and values.
  • Establish and regularly review treatment and response plans for incidents, negative impacts, or outcomes.
  • Establish and maintain procedures to regularly monitor system components for drift, decontextualization, or other AI system behavior factors (see the drift-check sketch after this list).
  • Establish and maintain procedures for capturing feedback about negative impacts.
  • Verify contingency processes to handle any negative impacts associated with mission-critical AI systems, and to deactivate systems.
  • Enable preventive and post-hoc exploration of AI system limitations by relevant AI actor groups.
  • Decommission systems that exceed risk tolerances.
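
The drift-monitoring action above can be supported by lightweight statistical checks. The following is a minimal, hypothetical sketch in Python: it compares a production feature sample against a reference (training-time) sample with the two-sample Kolmogorov–Smirnov test from SciPy and flags features whose distributions appear to have shifted. The feature names, significance threshold, and downstream alerting are illustrative assumptions, not part of this control.

```python
# Minimal, hypothetical drift check supporting the "monitor system
# components for drift" action above. The threshold and feature names
# are illustrative placeholders, not prescribed by the framework.
import numpy as np
from scipy.stats import ks_2samp


def detect_feature_drift(reference, production, feature_names, p_threshold=0.01):
    """Return names of features whose production distribution differs
    significantly from the reference (training-time) distribution."""
    drifted = []
    for i, name in enumerate(feature_names):
        _, p_value = ks_2samp(reference[:, i], production[:, i])
        if p_value < p_threshold:
            drifted.append(name)
    return drifted


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=(5000, 2))
    production = np.column_stack([
        rng.normal(0.0, 1.0, 5000),  # stable feature
        rng.normal(0.5, 1.0, 5000),  # shifted feature
    ])
    flagged = detect_feature_drift(reference, production, ["feature_a", "feature_b"])
    if flagged:
        # In practice, a flag like this would feed the treatment and
        # response plans described in this control.
        print(f"Possible drift detected in: {flagged}")
```

Flagged features would then be routed into the incident and response procedures this control calls for, rather than acted on automatically.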

>Documentation Guidance

Organizations can document the following:

  • Who will be responsible for maintaining, re-verifying, monitoring, and updating this AI once deployed?
  • Are the responsibilities of the personnel involved in the various AI governance processes clearly defined? (Including responsibilities to decommission the AI system.)
  • What processes exist for data generation, acquisition/collection, ingestion, staging/storage, transformations, security, maintenance, and dissemination?
  • How will the appropriate performance metrics, such as accuracy, of the AI be monitored after the AI is deployed? (An illustrative monitoring sketch follows this list.)
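
As a concrete illustration of the last question above, the sketch below shows one hedged way to monitor a deployed model’s rolling accuracy against a pre-agreed threshold as labeled outcomes arrive. The window size, threshold, and alert action are hypothetical placeholders; real deployments would tie a breach into the organization’s response, rollback, or decommissioning procedures.

```python
# Hypothetical post-deployment accuracy monitor. Window size,
# threshold, and the alert action are illustrative placeholders.
from collections import deque


class RollingAccuracyMonitor:
    def __init__(self, window=500, threshold=0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, prediction, ground_truth):
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def breached(self):
        # Only evaluate once a full window of labeled examples exists.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy() < self.threshold)


monitor = RollingAccuracyMonitor(window=500, threshold=0.90)
# monitor.record(model_prediction, delayed_label)  # call as labels arrive
if monitor.breached():
    print("Accuracy below the agreed threshold; trigger response procedures.")
```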

AI Transparency Resources

  • Artificial Intelligence Ethics Framework For The Intelligence Community.
  • WEF - Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organizations.
  • GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.

>References

AI Incident Database. 2022. AI Incident Database.

AIAAIC Repository. 2022. AI, algorithmic and automation incidents collected, dissected, examined, and divulged.

Andrew Burt and Patrick Hall. 2020. What to Do When AI Fails. O’Reilly Media, Inc. (May 18, 2020). Retrieved October 17, 2022.

National Institute of Standards and Technology (NIST). 2022. Cybersecurity Framework.

SANS Institute. 2022. Security Consensus Operational Readiness Evaluation (SCORE) Security Checklist [or Advanced Persistent Threat (APT) Handling Checklist].

Suchi Saria and Adarsh Subbaswamy. 2019. Tutorial: Safe and Reliable Machine Learning. arXiv:1904.07204.

>AI Actors

AI Deployment
Operation and Monitoring

>Topics

Risk Response

>Cross-Framework Mappings
