F001—Prevent AI cyber misuse
>Control Description
Implement or document guardrails to prevent AI-enabled misuse for cyber attacks and exploitation
Application
Mandatory
Frequency
Every 12 months
Capabilities
Text-generation, Automation, Voice-generation
>Controls & Evidence (2)
Legal Policies
F001.1
Documentation: Foundation model cyber capabilities
Core - This should include:
- Results of testing by the foundation model developer on offensive cyber capabilities, along with associated mitigations.
Typical evidence: Provider model cards, cybersecurity assessment reports from model developers, or foundation model documentation describing offensive cyber capabilities and mitigations
Location: Vendor Contracts
Technical Implementation
F001.2
Config: Cyber use detection
Supplemental - This may include:
- Implementing malicious use detection and blocking. For example: deploying available content filtering to detect requests for malicious code generation, attack planning, and vulnerability exploitation guidance; configuring automated blocking of cyber attack assistance requests; and maintaining databases of prohibited use patterns.
Typical evidence: Content filtering rules blocking cyber attack requests, keyword or pattern matching detecting malicious code generation attempts, automated blocking configuration for exploit development queries, or prohibited use pattern database.
Location: Engineering Code
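The detection-and-blocking approach described in F001.2 can be sketched with simple pattern matching. This is an illustrative minimal example, not a production filter: the pattern list, function name, and matching logic are all assumptions, and a real deployment would use a maintained prohibited-use database and a vendor's content filtering rather than a hard-coded regex list.

```python
import re

# Hypothetical prohibited-use pattern database. In practice these rules
# would be loaded from a maintained, versioned source, not hard-coded.
PROHIBITED_PATTERNS = [
    # Requests to generate malicious code
    re.compile(r"\b(write|generate|create)\b.*\b(malware|ransomware|keylogger)\b",
               re.IGNORECASE),
    # Requests for vulnerability exploitation guidance
    re.compile(r"\bexploit\b.*\b(vulnerability|CVE-\d{4}-\d+)\b", re.IGNORECASE),
    # Requests for attack planning assistance
    re.compile(r"\b(plan|assist with)\b.*\b(cyber ?attack|phishing campaign)\b",
               re.IGNORECASE),
]

def screen_request(prompt: str):
    """Return (blocked, matched_rule) for an incoming user prompt.

    Scans the prompt against each prohibited-use pattern and blocks
    on the first match; otherwise the request is allowed through.
    """
    for pattern in PROHIBITED_PATTERNS:
        if pattern.search(prompt):
            return True, pattern.pattern
    return False, None
```

A screener like this would sit in front of the model API; blocked requests and the matching rule would typically be logged, since those logs are themselves evidence for this control.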
>Cross-Framework Mappings
NIST AI RMF