MAP
18 actions in the MAP function
MAP-1.1: Intended purpose, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and TEVV and system metrics. (An illustrative documentation sketch follows this list.)
MAP-1.2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.
MAP-1.3: The organization’s mission and relevant goals for the AI technology are understood and documented.
MAP-1.4: The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.
MAP-1.5: Organizational risk tolerances are determined and documented.
MAP-1.6: System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
MAP-2.1: The specific task that the AI system will support, and the methods used to implement that task, are defined (e.g., classifiers, generative models, recommenders).
MAP-2.2: Information about the AI system’s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making informed decisions and taking subsequent actions.
MAP-2.3: Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.
MAP-3.1: Potential benefits of intended AI system functionality and performance are examined and documented.
MAP-3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.
MAP-3.3: Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.
MAP-3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented.
MAP-3.5: Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.
MAP-4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third-party’s intellectual property or other rights.
MAP-4.2: Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.
MAP-5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented. (An illustrative scoring sketch follows this list.)
MAP-5.2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.
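The context documentation called for in MAP-1.1 (and drawn on again in MAP-2.2) can be kept as a structured record alongside other system artifacts. The sketch below is a minimal illustration only, not a format defined by the framework; the class and field names are assumptions chosen for this example.

```python
# Illustrative only: a minimal structured record for the context documentation
# described in MAP-1.1 (intended purpose, users, settings, assumptions, TEVV metrics).
# Field names are hypothetical and not defined by the AI RMF.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class SystemContextRecord:
    system_name: str
    intended_purpose: str
    deployment_settings: list[str]          # prospective settings where the system will be deployed
    user_groups: list[str]                  # specific set or types of users and their expectations
    potential_benefits: list[str]           # positive impacts to individuals, communities, etc.
    potential_negative_impacts: list[str]   # negative impacts, including societal and environmental
    assumptions_and_limitations: list[str]  # assumptions about purposes, uses, and risks
    tevv_metrics: list[str] = field(default_factory=list)  # planned TEVV and system metrics


record = SystemContextRecord(
    system_name="loan-triage-recommender",
    intended_purpose="Rank incoming loan applications for manual review",
    deployment_settings=["regional retail-banking back office"],
    user_groups=["credit analysts expecting a ranked queue, not an approval decision"],
    potential_benefits=["faster review turnaround"],
    potential_negative_impacts=["disparate error rates across applicant groups"],
    assumptions_and_limitations=["training data reflects pre-2023 application volumes"],
    tevv_metrics=["false-negative rate by applicant segment"],
)

print(json.dumps(asdict(record), indent=2))  # e.g., persisted with the rest of the system documentation
```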
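For MAP-5.1, one common way to make likelihood and magnitude comparable across identified impacts – not prescribed by the framework – is an ordinal likelihood-by-magnitude rating. The sketch below assumes 1-to-5 scales and a simple product score; both are illustrative conventions, and the field names are hypothetical.

```python
# Illustrative only: an ordinal likelihood x magnitude rating for identified impacts (MAP-5.1).
# The 1-5 scales and the product score are example conventions, not AI RMF requirements.
from dataclasses import dataclass


@dataclass
class ImpactEntry:
    description: str
    beneficial: bool   # potentially beneficial vs. harmful impact
    likelihood: int    # 1 (rare) .. 5 (almost certain), judged from past uses, incident reports, feedback
    magnitude: int     # 1 (negligible) .. 5 (severe), relative to organizational risk tolerance
    evidence: str      # e.g., "public incident report", "external stakeholder feedback"

    def score(self) -> int:
        """Simple ordinal score used only to order entries for review."""
        return self.likelihood * self.magnitude


impacts = [
    ImpactEntry("Qualified applicants screened out by ranking errors", False, 3, 4,
                "feedback from external reviewers"),
    ImpactEntry("Reduced review backlog for analysts", True, 4, 3,
                "pilot deployment data"),
]

# Review the highest-scoring impacts first; the documented register retains all entries.
for entry in sorted(impacts, key=ImpactEntry.score, reverse=True):
    print(f"{entry.score():>2}  {'benefit' if entry.beneficial else 'harm':7}  {entry.description}")
```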