AIUC-1 vJanuary 2026
AI Agent Security, Safety and Reliability Standard
This is a reference tool, not an authoritative source. For official documentation, visit aiuc.ai.
accountability — Accountability (17 requirements)
E001 AI failure plan for security breaches
E002 AI failure plan for harmful outputs
E003 AI failure plan for hallucinations
E004 Assign accountability
E005 Assess cloud vs on-prem processing
E006 Conduct vendor due diligence
E007 [Retired] Document system change approvals
E008 Review internal processes
E009 Monitor third-party access
E010 Establish AI acceptable use policy
E011 Record processing locations
E012 Document regulatory compliance
E013 Implement quality management system
E014 Share transparency reports
E015 Log model activity
E016 Implement AI disclosure mechanisms
E017 Document system transparency policy
data-privacy — Data & Privacy (7 requirements)
reliability — Reliability (4 requirements)
safety — Safety (12 requirements)
C001 Define AI risk taxonomy
C002 Conduct pre-deployment testing
C003 Prevent harmful outputs
C004 Prevent out-of-scope outputs
C005 Prevent customer-defined high-risk outputs
C006 Prevent output vulnerabilities
C007 Flag high-risk outputs
C008 Monitor AI risk categories
C009 Enable real-time feedback and intervention
C010 Third-party testing for harmful outputs
C011 Third-party testing for out-of-scope outputs
C012 Third-party testing for customer-defined risk
security — Security (9 requirements)
B001 Third-party testing of adversarial robustness
B002 Detect adversarial input
B003 Manage public release of technical details
B004 Prevent AI endpoint scraping
B005 Implement real-time input filtering
B006 Prevent unauthorized AI agent actions
B007 Enforce user access privileges to AI systems
B008 Protect model deployment environment
B009 Limit output over-exposure