
GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned; organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.

>Control Description

Ongoing monitoring and periodic review of the risk management process and its outcomes are planned; organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.

>About

AI systems are dynamic and may perform in unexpected ways after deployment. Continuous monitoring is a risk management process for tracking unexpected issues and performance changes, in real time or at a specified frequency, across the AI system lifecycle.
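
Continuous monitoring at a specified frequency can be as simple as comparing recent performance against a deployment-time baseline. The following is a minimal illustrative sketch, not a prescribed mechanism; the window size, tolerance, and class name are assumptions for the example.

```python
from collections import deque

class PerformanceMonitor:
    """Hypothetical sketch: flag drift when rolling accuracy falls
    more than `tolerance` below the deployment-time baseline."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        # Log one observed outcome from the deployed system.
        self.outcomes.append(1 if correct else 0)

    def drifted(self):
        # Only evaluate once the rolling window is full.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        current = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - current) > self.tolerance
```

In practice the check would run on a schedule tied to the review frequency the organization defines, and a `drifted()` result would trigger the incident response process described below.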

Incident response and “appeal and override” are commonly used processes in information technology management. These processes enable real-time flagging of potential incidents and human adjudication of system outcomes.
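
An appeal-and-override flow can be sketched as routing low-confidence outcomes to a human review queue, with the reviewer's decision replacing the automated one. This is an illustrative assumption of one possible design; the function names and the confidence threshold are not taken from any standard.

```python
def adjudicate(outcome, confidence, threshold=0.8, review_queue=None):
    """Flag outcomes below a confidence threshold for human review;
    pass confident outcomes through as automated decisions."""
    if confidence < threshold:
        if review_queue is not None:
            review_queue.append(outcome)  # real-time flagging
        return {"decision": outcome, "status": "pending_review"}
    return {"decision": outcome, "status": "automated"}

def override(record, human_decision):
    # A human adjudicator overrides the flagged system outcome.
    return {"decision": human_decision, "status": "overridden"}
```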

Establishing and maintaining incident response plans can reduce the likelihood of additive impacts during an AI incident. Smaller organizations, which may not have comprehensive governance programs, can use incident response plans to address system failures, abuse, or misuse.

>Suggested Actions

  • Establish policies to allocate appropriate resources and capacity for assessing impacts of AI systems on individuals, communities and society.
  • Establish policies and procedures for monitoring and addressing AI system performance and trustworthiness, including bias and security problems, across the lifecycle of the system.
  • Establish policies for AI system incident response, or confirm that existing incident response policies apply to AI systems.
  • Establish policies to define organizational functions and personnel responsible for AI system monitoring and incident response activities.
  • Establish mechanisms to enable the sharing of feedback from impacted individuals or communities about negative impacts from AI systems.
  • Establish mechanisms to provide recourse for impacted individuals or communities to contest problematic AI system outcomes.
  • Establish opt-out mechanisms.
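
Several of the actions above amount to making roles, review frequency, and recourse channels explicit and auditable. One way to do that is to record them as structured policy; the sketch below is a hypothetical example, and all field names and values are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringPolicy:
    """Illustrative record of monitoring and incident-response
    responsibilities for one AI system."""
    system_name: str
    review_frequency_days: int           # how often periodic review occurs
    monitoring_owner: str                # role responsible for monitoring
    incident_response_owner: str         # role responsible for incidents
    feedback_channels: list = field(default_factory=list)
    opt_out_available: bool = False

# Example policy for a hypothetical system.
policy = MonitoringPolicy(
    system_name="loan-screening-model",
    review_frequency_days=90,
    monitoring_owner="ML Ops Lead",
    incident_response_owner="AI Risk Officer",
    feedback_channels=["support portal", "appeals inbox"],
    opt_out_available=True,
)
```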

>Documentation Guidance

Organizations can document the following:

  • To what extent does the system/entity consistently measure progress towards stated goals and objectives?
  • Did your organization implement a risk management system to address risks involved in deploying the identified AI solution (e.g. personnel risk or changes to commercial objectives)?
  • Did your organization address usability problems and test whether user interfaces served their intended purposes?

AI Transparency Resources

  • GAO-21-519SP: AI Accountability Framework for Federal Agencies & Other Entities.
  • WEF Model AI Governance Framework Assessment 2020.

>References

National Institute of Standards and Technology. (2018). Framework for improving critical infrastructure cybersecurity.

National Institute of Standards and Technology. (2012). Computer Security Incident Handling Guide. NIST Special Publication 800-61 Revision 2.

>AI Actors

Governance and Oversight
Operation and Monitoring

>Topics

Monitoring
Governance
Continual Improvement
