MEASURE 1: Risk Measurement Approaches and Metrics
This section lists the 14 requirements under MEASURE 1: Risk Measurement Approaches and Metrics.
MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation, starting with the most significant AI risks. The risks or trustworthiness characteristics that will not, or cannot, be measured are properly documented.
MS-1.1-001: Employ methods to trace the origin and modifications of digital content (a provenance-tracing sketch follows this list).
MS-1.1-002: Integrate tools designed to analyze content provenance and detect data anomalies, verify the authenticity of digital signatures, and identify patterns associated with misinformation or manipulation.
MS-1.1-003: Disaggregate evaluation metrics by demographic factors to identify any discrepancies in how content provenance mechanisms work across diverse populations (a disaggregation sketch also follows this list).
MS-1.1-004: Develop a suite of metrics to evaluate structured public feedback exercises informed by representative AI Actors.
MS-1.1-005: Evaluate novel methods and technologies for the measurement of GAI-related risks, including in context of use.
MS-1.1-006: Implement continuous monitoring of GAI system impacts to identify whether GAI outputs are equitable across various sub-populations.
MS-1.1-007: Evaluate the quality and integrity of data used in training and the provenance of AI-generated content.
MS-1.1-008: Define use cases, contexts of use, capabilities, and negative impacts where structured human feedback exercises (e.g., GAI red-teaming) would be most beneficial for GAI risk measurement and management.
MS-1.1-009: Track and document risks or opportunities related to all GAI risks that cannot be measured quantitatively, including explanations as to why some risks cannot be measured.
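The provenance requirements in MS-1.1-001 and MS-1.1-002 can be approached with a hash-chained modification log. The sketch below is a minimal illustration under assumptions, not a NIST-specified mechanism: the ProvenanceRecord and ProvenanceLog names are invented for this example, and a production system would more likely rely on signed manifests (e.g., C2PA) rather than bare SHA-256 hashes.

```python
"""Minimal sketch of content-provenance tracing (MS-1.1-001/002).

All class and function names here are illustrative, not part of any
NIST-specified API.
"""
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone


def content_hash(data: bytes) -> str:
    """Stable fingerprint of a content artifact."""
    return hashlib.sha256(data).hexdigest()


@dataclass
class ProvenanceRecord:
    content_sha256: str           # hash of the content after this step
    parent_sha256: str | None     # hash of the content this was derived from
    actor: str                    # who or what produced the modification
    action: str                   # e.g., "generated", "edited", "compressed"
    timestamp: str


@dataclass
class ProvenanceLog:
    records: list[ProvenanceRecord] = field(default_factory=list)

    def append(self, data: bytes, parent: bytes | None, actor: str, action: str) -> ProvenanceRecord:
        """Record one origin or modification event for a content artifact."""
        rec = ProvenanceRecord(
            content_sha256=content_hash(data),
            parent_sha256=content_hash(parent) if parent is not None else None,
            actor=actor,
            action=action,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self.records.append(rec)
        return rec

    def lineage(self, data: bytes) -> list[ProvenanceRecord]:
        """Walk parent hashes back to the original artifact."""
        by_hash = {r.content_sha256: r for r in self.records}
        chain, cursor = [], content_hash(data)
        while cursor in by_hash:
            rec = by_hash[cursor]
            chain.append(rec)
            cursor = rec.parent_sha256
        return chain


if __name__ == "__main__":
    log = ProvenanceLog()
    original = b"model-generated press release, v1"
    edited = b"model-generated press release, v2 (human edit)"
    log.append(original, parent=None, actor="gai-system", action="generated")
    log.append(edited, parent=original, actor="editor@example.org", action="edited")
    print(json.dumps([r.__dict__ for r in log.lineage(edited)], indent=2))
```

With this structure, any modification that was not logged shows up as a break in the parent chain when the lineage is walked.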
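For MS-1.1-003, disaggregation amounts to computing the same evaluation metric separately for each demographic group and flagging large gaps. The records, group labels, and the 5-point review threshold below are illustrative assumptions; real evaluations would use the organization's own metrics, demographic schema, and significance testing.

```python
"""Minimal sketch of disaggregated metric reporting (MS-1.1-003).

Data, group labels, and the review threshold are assumed for illustration.
"""
from collections import defaultdict

# Each record: (demographic_group, provenance_check_passed)
eval_records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def disaggregate_pass_rate(records):
    """Pass rate of a provenance check, broken out by demographic group."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        passes[group] += int(passed)
    return {g: passes[g] / totals[g] for g in totals}

rates = disaggregate_pass_rate(eval_records)
gap = max(rates.values()) - min(rates.values())
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.2%}")
if gap > 0.05:  # assumed review threshold
    print(f"Flag for review: {gap:.2%} gap between best- and worst-served groups")
```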
MEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI Actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.
MS-1.3-001: Define relevant groups of interest (e.g., demographic groups, subject matter experts, experience with GAI technology) within the context of use as part of plans for gathering structured public feedback.
MS-1.3-002: Engage in internal and external evaluations, GAI red-teaming, impact assessments, or other structured human feedback exercises in consultation with representative AI Actors with expertise and familiarity in the context of use, and/or who are impacted by the GAI system.
MS-1.3-003: Verify those conducting structured human feedback exercises are not directly involved in system development tasks for the same GAI model (an independence-check sketch follows below).
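MS-1.3-003 reduces, at minimum, to checking that the roster of people running a structured feedback exercise does not overlap with the roster of people who developed the system under evaluation. The rosters below are illustrative; a real check would query the organization's system of record for project roles.

```python
"""Minimal sketch of an evaluator-independence check (MS-1.3-003).

Roster contents are assumed for illustration.
"""
developers = {"alice@example.org", "bob@example.org"}                    # built the GAI system
red_team = {"carol@example.org", "bob@example.org", "dan@example.org"}   # proposed evaluators

conflicts = sorted(red_team & developers)
if conflicts:
    print("Replace evaluators with development-role conflicts:", ", ".join(conflicts))
else:
    print("Evaluator roster is independent of the development team.")
```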