
E002 AI failure plan for harmful outputs

>Control Description

Document an AI failure plan for harmful AI outputs that cause significant customer harm, assigning accountable owners and establishing remediation with third-party support as needed (e.g., legal, PR, insurers).

Application

Mandatory

Frequency

Every 12 months

Capabilities

Text-generation, Voice-generation, Image-generation

>Controls & Evidence (2)

Operational Practices

E002.1
Documentation: AI failure plan for harmful outputs

Core - This should include:

- Implementing customer communication protocols. For example, disclosure procedures, explanation of corrective actions, and follow-up commitments with executive approval for significant incidents.
- Establishing immediate mitigation steps with designated staff responsibilities. For example, system freeze capabilities, output suppression, customer notification, and system adjustments.

Typical evidence: Can be a standalone document or integrated into existing incident response procedures or policies
Location: AI failure plan
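The mitigation steps above (system freeze, output suppression by designated staff) can be sketched in code. This is a minimal illustrative sketch only, not part of the control: all names (`FreezeState`, `serve`, `freeze`) are hypothetical, and a real deployment would wire this into its serving stack and incident tooling.

```python
# Hypothetical sketch of a "system freeze" and output-suppression gate,
# as described in the mitigation steps. Not a prescribed implementation.
from dataclasses import dataclass, field


@dataclass
class FreezeState:
    frozen: bool = False
    reason: str = ""
    suppressed: list = field(default_factory=list)  # record for later review

    def freeze(self, reason: str) -> None:
        # Designated staff trigger this when harmful outputs are confirmed.
        self.frozen = True
        self.reason = reason

    def serve(self, output: str, harmful: bool):
        # Suppress an individual output flagged as harmful, or all outputs
        # while the system is frozen pending remediation.
        if self.frozen or harmful:
            self.suppressed.append(output)
            return None
        return output
```

The key design point the control implies is that suppression and freezing are separate levers: one blocks a single flagged output, the other halts the system while remediation and customer notification proceed.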
E002.2
Documentation: Additional harmful output failure procedures

Supplemental - This may include:

- Defining harmful output categories with reference to the risk taxonomy. For example, discriminatory content, offensive material, and inappropriate recommendations, ideally with concrete examples.
- Coordinating external support engagement. For example, legal counsel consultation, PR support, and insurance claim procedures.

Typical evidence: May include harmful output category definitions referenced to risk taxonomy, external support contact list (legal counsel, PR firms, insurance providers), support engagement procedures or runbooks, or escalation criteria for involving external parties.
Location: AI failure plan
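The category definitions and escalation criteria above could be captured as structured data alongside the plan. This is a hedged sketch under assumptions: the category names, severity levels, and external parties are hypothetical examples, not values mandated by the control.

```python
# Hypothetical harmful-output categories referenced to a risk taxonomy,
# plus escalation criteria for engaging external support. Example data only.
CATEGORIES = {
    "discriminatory_content": {"severity": "high"},
    "offensive_material": {"severity": "medium"},
    "inappropriate_recommendation": {"severity": "high"},
}

# Which external parties to engage at each severity (per the support list).
EXTERNAL_SUPPORT = {
    "high": ["legal counsel", "PR support"],
    "medium": [],
}


def escalation(category: str) -> list:
    """Return the external parties to engage for a given output category."""
    severity = CATEGORIES[category]["severity"]
    return EXTERNAL_SUPPORT.get(severity, [])
```

Keeping the mapping in data rather than prose makes the escalation criteria auditable evidence in their own right, which is one way to satisfy the "escalation criteria for involving external parties" item above.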

>Cross-Framework Mappings
