
MANAGE-4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.

>Control Description

Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.

>About

Regular monitoring processes enable system updates that enhance performance and functionality in accordance with regulatory and legal frameworks, and with organizational and contextual values and norms. These processes also facilitate analyses of root causes, system degradation, drift, near-misses, and failures, as well as incident response and documentation.
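As a purely illustrative aid (not part of the framework text), the sketch below shows one way a monitoring process might surface input drift between a reference sample and recent production data using a population stability index; the feature names, threshold, and data sources are assumptions made for the example.

```python
# Illustrative sketch: flag feature drift between a reference sample and
# recent production data using a Population Stability Index (PSI).
# The 0.2 threshold and the data below are assumptions, not framework guidance.
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare two 1-D numeric distributions; larger PSI means more drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0) for empty bins
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

def check_drift(reference_features, current_features, threshold=0.2):
    """Return the features whose PSI exceeds the threshold, for review."""
    flagged = {}
    for name, ref_values in reference_features.items():
        psi = population_stability_index(ref_values, current_features[name])
        if psi > threshold:
            flagged[name] = psi
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = {"transaction_amount": rng.lognormal(3.0, 0.5, 5000)}
    current = {"transaction_amount": rng.lognormal(3.4, 0.5, 5000)}  # shifted
    print(check_drift(reference, current))
```

Flagged features would then feed the root-cause and incident-response analyses described above rather than triggering automatic changes.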

AI actors across the lifecycle have many opportunities to capture and incorporate external feedback about system performance, limitations, and impacts, and to implement continuous improvements. Improvements may not always target the model pipeline or system processes, and may instead be based on metrics beyond accuracy or other quality performance measures. In these cases, improvements may entail adaptations to business or organizational procedures or practices. Organizations are encouraged to develop improvements that maintain traceability and transparency for developers, end users, auditors, and relevant AI actors.
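To make the traceability point concrete, here is a minimal, hypothetical sketch of a structured record that links a piece of stakeholder feedback to the resulting improvement and the system versions it affected; the field names are assumptions, not prescribed fields.

```python
# Illustrative sketch: a structured record tying stakeholder feedback to a
# system improvement, so developers, end users, and auditors can trace why a
# change was made. Field names and values are hypothetical assumptions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ImprovementRecord:
    feedback_source: str       # e.g. user survey, incident report, audit
    issue_summary: str         # what was observed
    affected_component: str    # model, pipeline, or business procedure
    change_description: str    # what changed and why
    tradeoffs_considered: list = field(default_factory=list)
    system_version_before: str = ""
    system_version_after: str = ""

record = ImprovementRecord(
    feedback_source="end-user feedback form",
    issue_summary="Confusing outputs reported for edge-case inputs",
    affected_component="business procedure (manual review step)",
    change_description="Route low-confidence predictions to human review",
    tradeoffs_considered=["latency vs. reliability"],
    system_version_before="1.4.0",
    system_version_after="1.5.0",
)

# Persisting the record as JSON lets it be versioned with system documentation.
print(json.dumps(asdict(record), indent=2))
```

Note that the example records a change to a business procedure rather than to the model itself, matching the point above that improvements are not limited to the model pipeline.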

>Suggested Actions

  • Integrate trustworthiness characteristics into protocols and metrics used for continual improvement (an illustrative sketch of such a metrics check follows this list).
  • Establish processes for evaluating and integrating feedback into AI system improvements.
  • Assess and evaluate alignment of proposed improvements with relevant regulatory and legal frameworks.
  • Assess and evaluate alignment of proposed improvements with the values and norms within the context of use.
  • Document the basis for decisions made relative to tradeoffs between trustworthiness characteristics, system risks, and system opportunities.
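The following sketch, offered only as an illustration of the first action above, gates a candidate update on both a quality metric and a trustworthiness-related metric; the metric names, thresholds, and gating rule are assumptions an organization would replace with its own.

```python
# Illustrative sketch: evaluate a candidate system update against a quality
# metric and a trustworthiness-related metric before accepting it.
# Metric names, thresholds, and the gating rule are hypothetical assumptions.
from typing import Dict

def evaluate_update(metrics: Dict[str, float],
                    min_accuracy: float = 0.90,
                    max_fairness_gap: float = 0.05) -> Dict[str, object]:
    """Return a small report stating whether the update meets both criteria."""
    accepted = (metrics["accuracy"] >= min_accuracy
                and metrics["fairness_gap"] <= max_fairness_gap)
    return {
        "metrics": metrics,
        "thresholds": {"min_accuracy": min_accuracy,
                       "max_fairness_gap": max_fairness_gap},
        "accepted": accepted,
    }

# Example: accuracy improved but the fairness gap widened, so the update is
# held for review and the tradeoff is documented rather than silently shipped.
print(evaluate_update({"accuracy": 0.93, "fairness_gap": 0.08}))
```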

>Documentation Guidance

Organizations can document the following:

  • How will user and other forms of stakeholder engagement be integrated into the model development process and regular performance review once deployed?
  • To what extent can users or parties affected by the outputs of the AI system test the AI system and provide feedback?
  • To what extent has the entity defined and documented the regulatory environment—including minimum requirements in laws and regulations?

AI Transparency Resources

  • GAO-21-519SP - Artificial Intelligence: An Accountability Framework for Federal Agencies & Other Entities.
  • Artificial Intelligence Ethics Framework For The Intelligence Community.

>References

Yen, Po-Yin, et al. "Development and Evaluation of Socio-Technical Metrics to Inform HIT Adaptation."

Carayon, Pascale, and Megan E. Salwei. "Moving toward a sociotechnical systems approach to continuous health information technology design: the path forward for improving electronic health record usability and reducing clinician burnout." Journal of the American Medical Informatics Association 28.5 (2021): 1026-1028.

Mishra, Deepa, et al. "Organizational capabilities that enable big data and predictive analytics diffusion and organizational performance: A resource-based perspective." Management Decision (2018).

>AI Actors

TEVV
AI Design
AI Development
AI Deployment
Operation and Monitoring
End-Users
Affected Individuals and Communities

>Topics

Monitoring
Impact Assessment
Risk Assessment
Continual Improvement

>Cross-Framework Mappings
