{"catalog":{"description":"The NIST AI RMF Playbook is designed to inform AI actors and make the AI RMF more usable. The AI RMF Playbook provides actionable suggestions to help produce or evaluate trustworthy AI systems, cultivate a responsible AI environment where risk and impact are taken into account, and increase organizational capacity for comprehensive socio-technical approaches to the design, development, deployment, and evaluation of AI technology.","uuid":"8d1afa1e-0c20-4844-814a-8a650c630f46","datePublished":"2023-01-26T00:00:00","master":false,"url":"https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf","abstract":"The NIST AI RMF is a framework to better manage risks to individuals, organizations, and society associated with articical intelligence (AI). the Framework is intended for voluntary use to improve the ability to incorporate trustworhtiness considerations into design, development, use, and evaluation of AI products, services, and systems.","defaultName":"NIST-AI-RMF-Playbook-2023","title":"NIST Artificial Intelligence Risk Management Playbook (AI RMF 1.0)","lastRevisionDate":"2024-01-29","regulationDatePublished":"2023-01-29","keywords":"information technology, artificial intelligence","securityControls":[{"description":"GOVERN 1.1 - Legal and regulatory requirements involving AI are understood, managed, and documented.
About
AI systems may be subject to specific applicable legal and regulatory requirements. Some legal requirements (e.g., nondiscrimination, data privacy, and security controls) can mandate documentation, disclosure, and increased AI system transparency. These requirements are complex, and their applicability may differ across applications and contexts. \n \nFor example, AI system testing processes for bias measurement, such as disparate impact, are not applied uniformly within the legal context. Disparate impact is broadly defined as a facially neutral policy or practice that disproportionately harms a group based on a protected trait. Notably, some modeling algorithms or debiasing techniques that rely on demographic information could also come into tension with legal prohibitions on disparate treatment (i.e., intentional discrimination).\n\nAdditionally, some intended users of AI systems may not have consistent or reliable access to fundamental internet technologies (a phenomenon widely described as the “digital divide”) or may experience difficulties interacting with AI systems due to disabilities or impairments. Such factors may mean different communities experience bias or other negative impacts when trying to access AI systems. Failure to address such design issues may pose legal risks, for example in employment-related activities affecting persons with disabilities.
Suggested Actions
\n","references":"","uuid":"b5c258e1-276f-44c8-ab8b-d4a7e9cd80dc","family":"Govern","subControls":"","weight":0,"title":"GOVERN 1.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 1.1"},{"description":"GOVERN 1.2 - The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures.
About
Policies, processes, and procedures are central components of effective AI risk management and fundamental to individual and organizational accountability. All stakeholders benefit from policies, processes, and procedures which require preventing harm by design and default. \n\nOrganizational policies and procedures will vary based on available resources and risk profiles, but can help systematize AI actor roles and responsibilities throughout the AI lifecycle. Without such policies, risk management can be subjective across the organization and exacerbate rather than minimize risks over time. Policies, or summaries thereof, are understandable to relevant AI actors. Policies reflect an understanding of the underlying metrics, measurements, and tests that are necessary to support policy and AI system design, development, deployment and use.\n\nLack of clear information about responsibilities and chains of command will limit the effectiveness of risk management.
Suggested Actions
\n","references":"","uuid":"5ce1057c-977b-4e9e-8946-2f92ecc2a17b","family":"Govern","subControls":"","weight":0,"title":"GOVERN 1.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 1.2"},{"description":"GOVERN 1.3 - Processes and procedures are in place to determine the needed level of risk management activities based on the organization's risk tolerance.
About
Risk management resources are finite in any organization. Adequate AI governance policies delineate the mapping, measurement, and prioritization of risks so that resources can be allocated toward the most material issues for an AI system, ensuring effective risk management. Policies may specify systematic processes for assigning mapped and measured risks to standardized risk scales. \n\nAI risk tolerances range from negligible to critical – from, respectively, almost no risk to risks that can result in irredeemable human, reputational, financial, or environmental losses. Risk tolerance rating policies consider different sources of risk (e.g., financial, operational, safety and wellbeing, business, reputational, or model risks). A typical risk measurement approach entails the multiplication, or qualitative combination, of measured or estimated impact and likelihood of impacts into a risk score (risk ≈ impact x likelihood). This score is then placed on a risk scale. Scales for risk may be qualitative, such as red-amber-green (RAG), or may entail simulations or econometric approaches. Impact assessments are a common tool for understanding the severity of mapped risks. In the most fulsome AI risk management approaches, all models are assigned to a risk level.
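The risk scoring approach described above lends itself to a brief illustration. The following Python sketch is a minimal, non-normative example, assuming qualitative 1–5 impact and likelihood ratings and illustrative red-amber-green (RAG) thresholds; none of these values or scale boundaries are prescribed by the AI RMF.

```python
# Illustrative only: the 1-5 ratings and RAG thresholds are assumptions,
# not values prescribed by the AI RMF.
def risk_score(impact: int, likelihood: int) -> int:
    """Combine impact and likelihood ratings (each 1-5) into a risk score."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be rated 1-5")
    return impact * likelihood  # risk ≈ impact x likelihood

def rag_rating(score: int) -> str:
    """Place a risk score on a red-amber-green (RAG) scale."""
    if score >= 15:
        return "red"    # critical: prioritize risk management resources
    if score >= 8:
        return "amber"  # material: monitor and mitigate
    return "green"      # negligible to low

if __name__ == "__main__":
    # Example: severe potential impact, moderate likelihood.
    score = risk_score(impact=4, likelihood=3)
    print(score, rag_rating(score))  # 12 amber
```

In practice, organizations calibrate scales and thresholds to their own risk tolerance and may use simulations or econometric approaches instead of a simple multiplicative score.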
Suggested Actions
\n","references":"","uuid":"d96f05ae-77c8-4144-ae98-b050fdc5cf7b","family":"Govern","subControls":"","weight":0,"title":"GOVERN 1.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 1.3"},{"description":"GOVERN 1.4 - The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.
About
Clear policies and procedures relating to documentation and transparency facilitate and enhance efforts to communicate roles and responsibilities for the Map, Measure and Manage functions across the AI lifecycle. Standardized documentation can help organizations systematically integrate AI risk management processes and enhance accountability efforts. For example, by adding their contact information to a work product document, AI actors can improve communication, increase ownership of work products, and potentially enhance consideration of product quality. Documentation may generate downstream benefits related to improved system replicability and robustness. Proper documentation storage and access procedures allow for quick retrieval of critical information during a negative incident. Explainable machine learning efforts (models and explanatory methods) may bolster technical documentation practices by introducing additional information for review and interpretation by AI Actors.
Suggested Actions
\n","references":"","uuid":"408b3e7a-ea05-40b7-8a00-07d3dcf4dde3","family":"Govern","subControls":"","weight":0,"title":"GOVERN 1.4","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 1.4"},{"description":"GOVERN 1.5 - Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.
About
AI systems are dynamic and may perform in unexpected ways once deployed or after deployment. Continuous monitoring is a risk management process for tracking unexpected issues and performance changes, in real-time or at a specific frequency, across the AI system lifecycle.\n\nIncident response and “appeal and override” are commonly used processes in information technology management. These processes enable real-time flagging of potential incidents, and human adjudication of system outcomes.\n\nEstablishing and maintaining incident response plans can reduce the likelihood of additive impacts during an AI incident. Smaller organizations, which may not have fulsome governance programs, can utilize incident response plans for addressing system failures, abuse, or misuse.
Suggested Actions
\n","references":"","uuid":"ce9dc57e-2597-4364-a8f2-fd7e9a945a94","family":"Govern","subControls":"","weight":0,"title":"GOVERN 1.5","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 1.5"},{"description":"GOVERN 1.6 - Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
About
An AI system inventory is an organized database of artifacts relating to an AI system or model. It may include system documentation, incident response plans, data dictionaries, links to implementation software or source code, names and contact information for relevant AI actors, or other information that may be helpful for model or system maintenance and incident response purposes. AI system inventories also enable a holistic view of organizational AI assets. A serviceable AI system inventory may allow for the quick resolution of:\n\n- specific queries for single models, such as “when was this model last refreshed?” \n- high-level queries across all models, such as, “how many models are currently deployed within our organization?” or “how many users are impacted by our models?” \n\nAI system inventories are a common element of traditional model risk management approaches and can provide technical, business and risk management benefits. Typically inventories capture all organizational models or systems, as partial inventories may not provide the value of a full inventory.
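As a rough, non-normative illustration of how an inventory can answer the queries above, the sketch below models inventory records in Python; the field names (e.g., last_refreshed, user_count) and the example records are hypothetical, not a prescribed schema.

```python
# Illustrative AI system inventory; schema and records are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class InventoryRecord:
    model_id: str
    owner_contact: str          # relevant AI actor for maintenance/incidents
    deployed: bool
    last_refreshed: date
    user_count: int
    incident_response_plan: str  # link to plan or documentation

inventory = [
    InventoryRecord("credit-scoring-v3", "risk-team@example.org", True,
                    date(2023, 11, 2), 120_000, "plans/credit-scoring.md"),
    InventoryRecord("support-chatbot-v1", "ml-ops@example.org", False,
                    date(2023, 6, 15), 0, "plans/chatbot.md"),
]

# Specific query for a single model: "when was this model last refreshed?"
refresh = next(r.last_refreshed for r in inventory
               if r.model_id == "credit-scoring-v3")

# High-level queries across all models.
deployed_count = sum(r.deployed for r in inventory)   # models currently deployed
impacted_users = sum(r.user_count for r in inventory if r.deployed)

print(refresh, deployed_count, impacted_users)
```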
Suggested Actions
\n","references":"","uuid":"64ef278c-e1e1-41b4-b20d-4e3213009122","family":"Govern","subControls":"","weight":0,"title":"GOVERN 1.6","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 1.6"},{"description":"GOVERN 1.7 - Processes and procedures are in place for decommissioning and phasing out of AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.
About
Irregular or indiscriminate termination or deletion of models or AI systems may be inappropriate and increase organizational risk. For example, AI systems may be subject to regulatory requirements or implicated in future security or legal investigations. To maintain trust, organizations may consider establishing policies and processes for the systematic and deliberate decommissioning of AI systems. Typically, such policies consider user and community concerns, risks in dependent and linked systems, and security, legal or regulatory concerns. Decommissioned models or systems may be stored in a model inventory along with active models, for an established length of time.
Suggested Actions
\n","references":"","uuid":"3b23dd62-e2a5-4061-89fb-b2dc67d165fe","family":"Govern","subControls":"","weight":0,"title":"GOVERN 1.7","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 1.7"},{"description":"GOVERN 2.1 - Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.
About
The development of a risk-aware organizational culture starts with defining responsibilities. For example, under some risk management structures, professionals carrying out test and evaluation tasks are independent from AI system developers and report through risk management functions or directly to executives. This kind of structure may help counter implicit biases such as groupthink or sunk cost fallacy and bolster risk management functions, so efforts are not easily bypassed or ignored.\n\nInstilling a culture where AI system design and implementation decisions can be questioned and course-corrected by empowered AI actors can enhance organizations’ abilities to anticipate and effectively manage risks before they become ingrained.
Suggested Actions
\n","references":"","uuid":"18d09cf8-8408-4864-b7a5-2e8bb612ba86","family":"Govern","subControls":"","weight":0,"title":"GOVERN 2.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 2.1"},{"description":"GOVERN 2.2 - The organization’s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
About
To enhance AI risk management adoption and effectiveness, organizations are encouraged to identify and integrate appropriate training curricula into enterprise learning requirements. Through regular training, AI actors can maintain awareness of:\n\n- AI risk management goals and their role in achieving them.\n- Organizational policies, applicable laws and regulations, and industry best practices and norms.\n\nSee MAP 3.4 and MAP 3.5 for additional relevant information.
Suggested Actions
\n","references":"","uuid":"577da62f-30f4-4399-b15e-66cedc140b89","family":"Govern","subControls":"","weight":0,"title":"GOVERN 2.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 2.2"},{"description":"GOVERN 2.3 - Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.
About
Senior leadership and members of the C-Suite in organizations that maintain an AI portfolio should maintain awareness of AI risks, affirm the organizational appetite for such risks, and be responsible for managing those risks.\n\nAccountability ensures that a specific team and individual is responsible for AI risk management efforts. Some organizations grant authority and resources (human and budgetary) to a designated officer who ensures adequate performance of the institution’s AI portfolio (e.g., predictive modeling, machine learning).
Suggested Actions
\n","references":"","uuid":"ad59e016-cb6d-4e79-97d8-0f9c13fcf0d8","family":"Govern","subControls":"","weight":0,"title":"GOVERN 2.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 2.3"},{"description":"GOVERN 3.1 - Decision-makings related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).
About
A team that includes AI actors with a diversity of experience, disciplines, and backgrounds enhances organizational capacity and capability for anticipating risks and is better equipped to carry out risk management. Consultation with external personnel may be necessary when internal teams lack a diverse range of lived experiences or disciplinary expertise.\n\nTo extend the benefits of diversity, equity, and inclusion to both the users and AI actors, it is recommended that teams are composed of a diverse group of individuals who reflect a range of backgrounds, perspectives and expertise.\n\nWithout commitment from senior leadership, beneficial aspects of team diversity and inclusion can be overridden by unstated organizational incentives that inadvertently conflict with the broader values of a diverse workforce.
Suggested Actions
\n","references":"","uuid":"2efaef6e-f3dd-490d-9033-6ca85e6f9cbc","family":"Govern","subControls":"","weight":0,"title":"GOVERN 3.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 3.1"},{"description":"GOVERN 3.2 - Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.
About
Identifying and managing AI risks and impacts is enhanced when a broad set of perspectives and actors across the AI lifecycle, including technical, legal, compliance, social science, and human factors expertise, is engaged. AI actors include those who operate, use, or interact with AI systems for downstream tasks, or monitor AI system performance. Effective risk management efforts include:\n\n- clear definitions and differentiation of the various human roles and responsibilities for AI system oversight and governance\n- recognizing and clarifying differences between AI system overseers and those using or interacting with AI systems.
Suggested Actions
\n","references":"","uuid":"27d0cadb-86e5-42b8-be69-80905c71d431","family":"Govern","subControls":"","weight":0,"title":"GOVERN 3.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 3.2"},{"description":"GOVERN 4.1 - Organizational policies, and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize negative impacts.
About
A risk culture and accompanying practices can help organizations effectively triage the most critical risks. Organizations in some industries implement three (or more) “lines of defense,” where separate teams are held accountable for different aspects of the system lifecycle, such as development, risk management, and auditing. While a traditional three-lines approach may be impractical for smaller organizations, leadership can commit to cultivating a strong risk culture through other means. For example, “effective challenge” is a culture-based practice that encourages critical thinking and questioning of important design and implementation decisions by experts with the authority and stature to make such changes.\n\nRed-teaming is another risk measurement and management approach. This practice consists of adversarial testing of AI systems under stress conditions to seek out failure modes or vulnerabilities in the system. Red-teams are composed of external experts or personnel who are independent from internal AI actors.
Suggested Actions
\n","references":"","uuid":"b4602793-ce11-44ea-b47c-fe4c189f7f83","family":"Govern","subControls":"","weight":0,"title":"GOVERN 4.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 4.1"},{"description":"GOVERN 4.2 - Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate and use, and communicate about the impacts more broadly.
About
Impact assessments are one approach for driving responsible technology development practices. Within a specific use case, these assessments can provide a high-level structure for organizations to frame risks of a given algorithm or deployment. Impact assessments can also serve as a mechanism for organizations to articulate risks and generate documentation for management and oversight activities when harms do arise.\n\nImpact assessments may:\n\n- be applied at the beginning of a process but also iteratively and regularly since goals and outcomes can evolve over time. \n- include perspectives from AI actors, including operators, users, and potentially impacted communities (including historically marginalized communities, those with disabilities, and individuals impacted by the digital divide). \n- assist in “go/no-go” decisions for an AI system. \n- consider conflicts of interest, or undue influence, related to the organizational team being assessed.\n\nSee the MAP function playbook guidance for more information relating to impact assessments.
Suggested Actions
\n","references":"","uuid":"46358637-35b1-4620-a752-47d2bbb5ce72","family":"Govern","subControls":"","weight":0,"title":"GOVERN 4.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 4.2"},{"description":"GOVERN 4.3 - Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.
About
Identifying AI system limitations, detecting and tracking negative impacts and incidents, and sharing information about these issues with appropriate AI actors will improve risk management. Issues such as concept drift, AI bias and discrimination, shortcut learning or underspecification are difficult to identify using current standard AI testing processes. Organizations can institute in-house use and testing policies and procedures to identify and manage such issues. Efforts can take the form of pre-alpha or pre-beta testing, or deploying internally developed systems or products within the organization. Testing may entail limited and controlled in-house, or publicly available, AI system testbeds, and accessibility of AI system interfaces and outputs.\n\nWithout policies and procedures that enable consistent testing practices, risk management efforts may be bypassed or ignored, exacerbating risks or leading to inconsistent risk management activities.\n\nInformation sharing about impacts or incidents detected during testing or deployment can:\n\n* draw attention to AI system risks, failures, abuses or misuses, \n* allow organizations to benefit from insights based on a wide range of AI applications and implementations, and \n* allow organizations to be more proactive in avoiding known failure modes.\n\nOrganizations may consider sharing incident information with the AI Incident Database, the AIAAIC, users, impacted communities, or with traditional cyber vulnerability databases, such as the MITRE CVE list.
Suggested Actions
\n","references":"","uuid":"fe97de0d-8b99-4243-a90d-76aa28e72c30","family":"Govern","subControls":"","weight":0,"title":"GOVERN 4.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 4.3"},{"description":"GOVERN 5.1 - Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.
About
Beyond internal and laboratory-based system testing, organizational policies and practices may consider AI system fitness-for-purpose related to the intended context of use.\n\nParticipatory stakeholder engagement is one type of qualitative activity to help AI actors answer questions such as whether to pursue a project or how to design with impact in mind. This type of feedback, with domain expert input, can also assist AI actors to identify emergent scenarios and risks in certain AI applications. The consideration of when and how to convene a group and the kinds of individuals, groups, or community organizations to include is an iterative process connected to the system's purpose and its level of risk. Other factors relate to how to collaboratively and respectfully capture stakeholder feedback and insight that is useful, without being a solely perfunctory exercise.\n\nThese activities are best carried out by personnel with expertise in participatory practices, qualitative methods, and translation of contextual feedback for technical audiences.\n\nParticipatory engagement is not a one-time exercise and is best carried out from the very beginning of AI system commissioning through the end of the lifecycle. Organizations can consider how to incorporate engagement when beginning a project and as part of their monitoring of systems. Engagement is often utilized as a consultative practice, but this perspective may inadvertently lead to “participation washing.” Organizational transparency about the purpose and goal of the engagement can help mitigate that possibility.\n\nOrganizations may also consider targeted consultation with subject matter experts as a complement to participatory findings. Experts may assist internal staff in identifying and conceptualizing potential negative impacts that were previously not considered.
Suggested Actions
\n","references":"","uuid":"11c55046-768d-4c41-8e41-90eb1f4760bf","family":"Govern","subControls":"","weight":0,"title":"GOVERN 5.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 5.1"},{"description":"GOVERN 5.2 - Mechanisms are established to enable AI actors to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.
About
Organizational policies and procedures that equip AI actors with the processes, knowledge, and expertise needed to inform collaborative decisions about system deployment improve risk management. These decisions are closely tied to AI systems and organizational risk tolerance.\n\nRisk tolerance, established by organizational leadership, reflects the level and type of risk the organization will accept while conducting its mission and carrying out its strategy. When risks arise, resources are allocated based on the assessed risk of a given AI system. Organizations typically apply a risk tolerance approach where higher-risk systems receive larger allocations of risk management resources and lower-risk systems receive fewer resources.
Suggested Actions
\n","references":"","uuid":"e8b26471-650e-4953-b813-07de516059fc","family":"Govern","subControls":"","weight":0,"title":"GOVERN 5.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 5.2"},{"description":"GOVERN 6.1 - Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third party’s intellectual property or other rights.
About
Risk measurement and management can be complicated by how customers use or integrate third-party data or systems into AI products or services, particularly without sufficient internal governance structures and technical safeguards. \n\nOrganizations usually engage multiple third parties for external expertise, data, software packages (both open source and commercial), and software and hardware platforms across the AI lifecycle. This engagement has beneficial uses and can also increase the complexity of risk management efforts.\n\nOrganizational approaches to managing third-party (positive and negative) risk may be tailored to the resources, risk profile, and use case for each system. Organizations can apply governance approaches to third-party AI systems and data as they would for internal resources — including open source software, publicly available data, and commercially available models.
Suggested Actions
\n","references":"","uuid":"0731a39e-e8fc-4312-acdd-40faa895b3da","family":"Govern","subControls":"","weight":0,"title":"GOVERN 6.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 6.1"},{"description":"GOVERN 6.2 - Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.
About
To mitigate the potential harms of third-party system failures, organizations may implement policies and procedures that include redundancies for covering third-party functions.
Suggested Actions
\n","references":"","uuid":"bde0bf99-a8ec-4be1-bb58-d1a6b036c333","family":"Govern","subControls":"","weight":0,"title":"GOVERN 6.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"GOVERN 6.2"},{"description":"MANAGE 1.1 - A determination is as to whether the AI system achieves its intended purpose and stated objectives and whether its development or deployment should proceed.
About
AI systems may not necessarily be the right solution for a given business task or problem. A standard risk management practice is to formally weigh an AI system’s negative risks against its benefits, and to determine if the AI system is an appropriate solution. Tradeoffs among trustworthiness characteristics, such as deciding to deploy a system based on system performance versus system transparency, may require regular assessment throughout the AI lifecycle.
Suggested Actions
\n","references":"","uuid":"f99a66d5-1073-4da8-91b0-1780fdb8c233","family":"Manage","subControls":"","weight":0,"title":"MANAGE 1.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 1.1"},{"description":"MANAGE 1.2 - Treatment of documented AI risks is prioritized based on impact, likelihood, or available resources or methods.
About
Risk refers to the composite measure of an event’s probability of occurring and the magnitude (or degree) of the consequences of the corresponding events. The impacts, or consequences, of AI systems can be positive, negative, or both and can result in opportunities or risks. \n\nOrganizational risk tolerances are often informed by several internal and external factors, including existing industry practices, organizational values, and legal or regulatory requirements. Since risk management resources are often limited, organizations usually assign them based on risk tolerance. AI risks that are deemed more serious receive more oversight attention and risk management resources.
Suggested Actions
\n","references":"","uuid":"9dab2a41-c340-4522-9e86-03ac4d413e74","family":"Manage","subControls":"","weight":0,"title":"MANAGE 1.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 1.2"},{"description":"MANAGE 1.3 - Responses to the AI risks deemed high priority as identified by the Map function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.
About
Outcomes from GOVERN-1, MAP-5 and MEASURE-2, can be used to address and document identified risks based on established risk tolerances. Organizations can follow existing regulations and guidelines for risk criteria, tolerances and responses established by organizational, domain, discipline, sector, or professional requirements. In lieu of such guidance, organizations can develop risk response plans based on strategies such as accepted model risk management, enterprise risk management, and information sharing and disclosure practices.
Suggested Actions
\n","references":"","uuid":"78b0abf4-3cdd-4a62-a231-064d77ecaa24","family":"Manage","subControls":"","weight":0,"title":"MANAGE 1.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 1.3"},{"description":"MANAGE 1.4 - Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.
About
Organizations may choose to accept or transfer some of the documented risks from MAP and MANAGE 1.3 and 2.1. Such risks, known as residual risk, may affect downstream AI actors such as those engaged in system procurement or use. Transparent monitoring and management of residual risks enables cost-benefit analysis and the examination of the potential value of AI systems versus their potential negative impacts.
Suggested Actions
\n","references":"","uuid":"3dad82d1-4e8b-48e5-968b-c006ef6e3f60","family":"Manage","subControls":"","weight":0,"title":"MANAGE 1.4","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 1.4"},{"description":"MANAGE 2.1 - Resources required to manage AI risks are taken into account, along with viable non-AI alternative systems, approaches, or methods – to reduce the magnitude or likelihood of potential impacts.
About
Organizational risk response may entail identifying and analyzing alternative approaches, methods, processes or systems, and balancing tradeoffs between trustworthiness characteristics and how they relate to organizational principles and societal values. Analysis of these tradeoffs is informed by consulting with interdisciplinary organizational teams, independent domain experts, and engaging with individuals or community groups. These processes require sufficient resource allocation.
Suggested Actions
\n","references":"","uuid":"04e0c161-de91-4115-b271-871c40356dc9","family":"Manage","subControls":"","weight":0,"title":"MANAGE 2.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 2.1"},{"description":"MANAGE 2.2 - Mechanisms are in place and applied to sustain the value of deployed AI systems.
About
System performance and trustworthiness may evolve and shift over time once an AI system is deployed and put into operation. This phenomenon, generally known as drift, can degrade the value of the AI system to the organization and increase the likelihood of negative impacts. Regular monitoring of AI systems’ performance and trustworthiness enhances organizations’ ability to detect and respond to drift, and thus sustain an AI system’s value once deployed. Processes and mechanisms for regular monitoring address system functionality and behavior, as well as impacts and alignment with the values and norms within the specific context of use. For example, considerations regarding impacts on personal or public safety or privacy may include limiting high speeds when operating autonomous vehicles or restricting illicit content recommendations for minors. \n\nRegular monitoring activities can enable organizations to systematically and proactively identify emergent risks and respond according to established protocols and metrics. Options for organizational responses include 1) avoiding the risk, 2) accepting the risk, 3) mitigating the risk, or 4) transferring the risk. Each of these actions requires planning and resources. Organizations are encouraged to establish risk management protocols with consideration of the trustworthiness characteristics, the deployment context, and real-world impacts.
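As a minimal, non-normative sketch of regular monitoring for drift, the Python example below tracks a rolling performance metric against a deployment-time baseline and flags when the drop exceeds a tolerance; the metric, window size, and threshold are illustrative assumptions rather than prescribed values.

```python
# Minimal drift-monitoring sketch; baseline, tolerance, and window are
# illustrative assumptions, not values prescribed by the AI RMF.
from collections import deque

class PerformanceMonitor:
    """Track a rolling performance metric and flag possible drift."""

    def __init__(self, baseline: float, tolerance: float, window: int = 500):
        self.baseline = baseline           # performance measured at deployment
        self.tolerance = tolerance         # acceptable drop before escalation
        self.recent = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        """Record one outcome once ground truth becomes available."""
        self.recent.append(1.0 if correct else 0.0)

    def drifted(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False                   # not enough evidence yet
        current = sum(self.recent) / len(self.recent)
        return (self.baseline - current) > self.tolerance

monitor = PerformanceMonitor(baseline=0.92, tolerance=0.05)
# A True result from drifted() would feed the established response protocol
# (avoid, accept, mitigate, or transfer the risk), not an automatic action.
```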
Suggested Actions
\n","references":"","uuid":"01a48f5e-4921-44de-a1ab-129e9cc1dfa6","family":"Manage","subControls":"","weight":0,"title":"MANAGE 2.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 2.2"},{"description":"MANAGE 2.3 - Procedures are followed to respond to and recover from a previously unknown risk when it is identified.
About
AI systems, like any technology, can demonstrate non-functionality, failure, or unexpected and unusual behavior. They can also be subject to attacks, incidents, or other misuse or abuse, the sources of which are not always known a priori. Organizations can establish, document, communicate, and maintain treatment procedures to recognize, counter, mitigate, and manage risks that were not previously identified.
Suggested Actions
\n","references":"","uuid":"fba306c1-eea1-45fc-a39e-eb478f302b47","family":"Manage","subControls":"","weight":0,"title":"MANAGE 2.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 2.3"},{"description":"MANAGE 2.4 - Mechanisms are in place and applied, responsibilities are assigned and understood to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.
About
Performance inconsistent with intended use does not always increase risk or lead to negative impacts. Rigorous TEVV practices are useful for protecting against negative impacts regardless of intended use. When negative impacts do arise, superseding (bypassing), disengaging, or deactivating/decommissioning a model, AI system component(s), or the entire AI system may be necessary, such as when: \n\n- a system reaches the end of its lifetime\n- detected or identified risks exceed tolerance thresholds\n- adequate system mitigation actions are beyond the organization’s capacity\n- feasible system mitigation actions do not meet regulatory or legal requirements, norms, or standards\n- impending risk is detected during continual monitoring, for which feasible mitigation cannot be identified or implemented in a timely fashion\n\nSafely removing AI systems from operation, either temporarily or permanently, under these scenarios requires standard protocols that minimize operational disruption and downstream negative impacts. Protocols can involve redundant or backup systems that are developed in alignment with established system governance policies (see GOVERN 1.7), regulatory compliance, legal frameworks, business requirements, and norms and standards within the application context of use. Decision thresholds and metrics for actions to bypass or deactivate system components are part of continual monitoring procedures. Incidents that result in a bypass/deactivate decision require documentation and review to understand root causes, impacts, and potential opportunities for mitigation and redeployment. Organizations are encouraged to develop risk and change management protocols that consider and anticipate upstream and downstream consequences of both temporary and permanent decommissioning, and provide contingency options.
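The decision thresholds mentioned above can be made concrete with a small, non-normative sketch; the threshold values, risk scale, and action names below are illustrative assumptions, not prescribed criteria.

```python
# Illustrative decision logic for superseding or deactivating a component;
# thresholds and action names are assumptions, not prescribed by the AI RMF.
RISK_TOLERANCE = 0.7      # assumed organizational tolerance on a 0-1 risk scale
CRITICAL_THRESHOLD = 0.9  # assumed level at which deactivation is triggered

def monitoring_action(measured_risk: float, mitigation_feasible: bool) -> str:
    """Map a continually monitored risk level to a policy-defined response."""
    if measured_risk >= CRITICAL_THRESHOLD:
        return "deactivate"            # remove the component from operation
    if measured_risk >= RISK_TOLERANCE:
        # Bypass to a redundant/backup system (see GOVERN 1.7) when feasible
        # mitigation cannot be implemented in a timely fashion.
        return "mitigate" if mitigation_feasible else "bypass-to-backup"
    return "continue"

print(monitoring_action(0.75, mitigation_feasible=False))  # bypass-to-backup
```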
Suggested Actions
\n","references":"","uuid":"a7cf089a-6eb0-49d8-98e8-1749dd9327f4","family":"Manage","subControls":"","weight":0,"title":"MANAGE 2.4","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 2.4"},{"description":"MANAGE 3.1 - AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.
About
AI systems may depend on external resources and associated processes, including third-party data, software, or hardware systems. Third parties supplying organizations with components and services (including tools, software, and expertise for AI system design, development, deployment, or use) can improve efficiency and scalability. It can also increase complexity and opacity and, in turn, risk. Documenting third-party technologies, personnel, and resources that were employed can help manage risks. Focusing first and foremost on risks involving physical safety, legal liabilities, regulatory compliance, and negative impacts on individuals, groups, or society is recommended.
Suggested Actions
\n","references":"","uuid":"74c6a5ab-07b4-4a8c-b444-681aa2d596d5","family":"Manage","subControls":"","weight":0,"title":"MANAGE 3.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 3.1"},{"description":"MANAGE 3.2 - Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.
About
A common approach in AI development is transfer learning, whereby an existing pre-trained model is adapted for use in a different, but related application. AI actors in development tasks often use pre-trained models from third-party entities for tasks such as image classification, language prediction, and entity recognition, because the resources to build such models may not be readily available to most organizations. Pre-trained models are typically trained to address various classification or prediction problems, using exceedingly large datasets and computationally intensive resources. The use of pre-trained models can make it difficult to anticipate negative system outcomes or impacts. Lack of documentation or transparency tools increases the difficulty and general complexity when deploying pre-trained models and hinders root cause analyses.
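For context, a minimal transfer-learning sketch is shown below, assuming PyTorch and torchvision are available; the pre-trained backbone, class count, and hyperparameters are illustrative assumptions. Recording the source and version of the pre-trained weights alongside such code supports the monitoring and root cause analyses described above.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision assumed available);
# dataset, class count, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed downstream task

# Load a third-party pre-trained image classifier.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pre-trained feature extractor; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer to adapt the model to the new, related task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (replace with real data).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```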
Suggested Actions
\n","references":"","uuid":"055e4f43-0410-4628-8c76-8e97246fb941","family":"Manage","subControls":"","weight":0,"title":"MANAGE 3.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 3.2"},{"description":"MANAGE 4.1 - Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.
About
AI system performance and trustworthiness can change due to a variety of factors. Regular AI system monitoring can help deployers identify performance degradations, adversarial attacks, unexpected and unusual behavior, near-misses, and impacts. Including pre- and post-deployment external feedback about AI system performance can enhance organizational awareness about positive and negative impacts, and reduce the time to respond to risks and harms.
Suggested Actions
\n","references":"","uuid":"be6d3545-d7ab-4b36-b43e-197b41d91561","family":"Manage","subControls":"","weight":0,"title":"MANAGE 4.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 4.1"},{"description":"MANAGE 4.2 - Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.
About
Regular monitoring processes enable system updates to enhance performance and functionality in accordance with regulatory and legal frameworks, and organizational and contextual values and norms. These processes also facilitate analyses of root causes, system degradation, drift, near-misses, and failures, and incident response and documentation. \n\nAI actors across the lifecycle have many opportunities to capture and incorporate external feedback about system performance, limitations, and impacts, and implement continuous improvements. Improvements may not always be to model pipeline or system processes, and may instead be based on metrics beyond accuracy or other quality performance measures. In these cases, improvements may entail adaptations to business or organizational procedures or practices. Organizations are encouraged to develop improvements that will maintain traceability and transparency for developers, end users, auditors, and relevant AI actors.
Suggested Actions
\n","references":"","uuid":"91324199-c741-486d-94ea-f614b3572e89","family":"Manage","subControls":"","weight":0,"title":"MANAGE 4.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 4.2"},{"description":"MANAGE 4.3 - Incidents and errors are communicated to relevant AI actors including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.
About
Regularly documenting an accurate and transparent account of identified and reported errors can enhance AI risk management activities. Examples include:\n\n- how errors were identified, \n- incidents related to the error, \n- whether the error has been repaired, and\n- how repairs can be distributed to all impacted stakeholders and users.
Suggested Actions
\n","references":"","uuid":"38f1fe6b-ba42-439c-9626-9452ba610cdd","family":"Manage","subControls":"","weight":0,"title":"MANAGE 4.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MANAGE 4.3"},{"description":"MAP 1.1 - Intended purpose, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes; uses and risks across the development or product AI lifecycle; TEVV and system metrics.
About
Highly accurate and optimized systems can cause harm. Relatedly, organizations should expect broadly deployed AI tools to be reused, repurposed, and potentially misused regardless of intentions. \n\nAI actors can work collaboratively, and with external parties such as community groups, to help delineate the bounds of acceptable deployment, consider preferable alternatives, and identify principles and strategies to manage likely risks. Context mapping is the first step in this effort, and may include examination of the following: \n\n* intended purpose and impact of system use. \n* concept of operations. \n* intended, prospective, and actual deployment setting. \n* requirements for system deployment and operation. \n* end user and operator expectations. \n* specific set or types of end users. \n* potential negative impacts to individuals, groups, communities, organizations, and society – or context-specific impacts such as legal requirements or impacts to the environment. \n* unanticipated, downstream, or other unknown contextual factors.\n* how AI system changes connect to impacts. \n\nThese types of processes can assist AI actors in understanding how limitations, constraints, and other realities associated with the deployment and use of AI technology can create impacts once they are deployed or operate in the real world. When coupled with the enhanced organizational culture resulting from the established policies and procedures in the Govern function, the Map function can provide opportunities to foster and instill new perspectives, activities, and skills for approaching risks and impacts. \n\nContext mapping also includes discussion and consideration of non-AI or non-technology alternatives especially as related to whether the given context is narrow enough to manage AI and its potential negative impacts. Non-AI alternatives may include capturing and evaluating information using semi-autonomous or mostly-manual methods.
Suggested Actions
\n","references":"","uuid":"cfbd848b-66ec-4af1-bc7a-a8d766e6e298","family":"Map","subControls":"","weight":0,"title":"MAP 1.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 1.1"},{"description":"MAP 1.2 - Inter-disciplinary AI actors, competencies, skills and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.
About
Successfully mapping context requires a team of AI actors with a diversity of experience, expertise, abilities and backgrounds, and with the resources and independence to engage in critical inquiry.\n\nHaving a diverse team contributes to more broad and open sharing of ideas and assumptions about the purpose and function of the technology being designed and developed – making these implicit aspects more explicit. The benefit of a diverse staff in managing AI risks is not the beliefs or presumed beliefs of individual workers, but the behavior that results from a collective perspective. An environment which fosters critical inquiry creates opportunities to surface problems and identify existing and emergent risks.
Suggested Actions
\n","references":"","uuid":"0b845669-1959-4398-ab8a-e8d9c54cf885","family":"Map","subControls":"","weight":0,"title":"MAP 1.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 1.2"},{"description":"MAP 1.3 - The organization’s mission and relevant goals for the AI technology are understood and documented.
About
Defining and documenting the specific business purpose of an AI system in a broader context of societal values helps teams to evaluate risks and increases the clarity of “go/no-go” decisions about whether to deploy.\n\nTrustworthy AI technologies may present a demonstrable business benefit beyond implicit or explicit costs, provide added value, and avoid wasted resources. Organizations can feel confident in performing risk avoidance, that is, in not implementing an AI solution, when its implicit or explicit risks outweigh its potential benefits.\n\nFor example, making AI systems more equitable can result in better managed risk, and can help enhance consideration of the business value of making inclusively designed, accessible and more equitable AI systems.
Suggested Actions
\n","references":"","uuid":"eb47dfcf-d882-4422-9580-84866fa069cf","family":"Map","subControls":"","weight":0,"title":"MAP 1.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 1.3"},{"description":"MAP 1.4 - The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.
About
Socio-technical AI risks emerge from the interplay between technical development decisions and how a system is used, who operates it, and the social context into which it is deployed. Addressing these risks is complex and requires a commitment to understanding how contextual factors may interact with AI lifecycle actions. One such contextual factor is how organizational mission and identified system purpose create incentives within AI system design, development, and deployment tasks that may result in positive and negative impacts. By establishing a comprehensive and explicit enumeration of AI systems’ context of business use and expectations, organizations can identify and manage these types of risks.
Suggested Actions
\n","references":"","uuid":"a90ac769-a5ce-4d21-bfd5-e121f08f5a2f","family":"Map","subControls":"","weight":0,"title":"MAP 1.4","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 1.4"},{"description":"MAP 1.5 - Organizational risk tolerances are determined and documented.
About
Risk tolerance reflects the level and type of risk the organization is willing to accept while conducting its mission and carrying out its strategy.\n\nOrganizations can follow existing regulations and guidelines for risk criteria, tolerance and response established by organizational, domain, discipline, sector, or professional requirements. Some sectors or industries may have established definitions of harm or may have established documentation, reporting, and disclosure requirements. \n\nWithin sectors, risk management may depend on existing guidelines for specific applications and use case settings. Where established guidelines do not exist, organizations will want to define reasonable risk tolerance in consideration of different sources of risk (e.g., financial, operational, safety and wellbeing, business, reputational, and model risks) and different levels of risk (e.g., from negligible to critical).\n\nRisk tolerances inform and support decisions about whether to continue with development or deployment - termed “go/no-go”. Go/no-go decisions related to AI system risks can take stakeholder feedback into account, but remain independent from stakeholders’ vested financial or reputational interests.\n\nIf mapping risk is prohibitively difficult, a \"no-go\" decision may be considered for the specific system.
Suggested Actions
\n","references":"","uuid":"195dc749-431a-4845-b565-1c097a9d67b8","family":"Map","subControls":"","weight":0,"title":"MAP 1.5","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 1.5"},{"description":"MAP 1.6 - System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
About
AI system development requirements may outpace documentation processes for traditional software. When written requirements are unavailable or incomplete, AI actors may inadvertently overlook business and stakeholder needs, over-rely on implicit human biases such as confirmation bias and groupthink, and maintain exclusive focus on computational requirements. \n\nEliciting system requirements, designing for end users, and considering societal impacts early in the design phase is a priority that can enhance AI systems’ trustworthiness.
Suggested Actions
\n","references":"","uuid":"64e1d08b-4d5d-44b1-b4ff-e841c36e18e2","family":"Map","subControls":"","weight":0,"title":"MAP 1.6","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 1.6"},{"description":"MAP 2.1 - The specific task, and methods used to implement the task, that the AI system will support is defined (e.g., classifiers, generative models, recommenders).
About
AI actors define the technical learning or decision-making task(s) an AI system is designed to accomplish, or the benefits that the system will provide. The clearer and narrower the task definition, the easier it is to map its benefits and risks, leading to more fulsome risk management.
Suggested Actions
\n","references":"","uuid":"4047b075-aa94-45db-853d-a899cf07ca4f","family":"Map","subControls":"","weight":0,"title":"MAP 2.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 2.1"},{"description":"MAP 2.2 - Information about the AI system’s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making informed decisions and taking subsequent actions.
About
An AI lifecycle consists of many interdependent activities involving a diverse set of actors that often do not have full visibility or control over other parts of the lifecycle and its associated contexts or risks. The interdependencies between these activities, and among the relevant AI actors and organizations, can make it difficult to reliably anticipate potential impacts of AI systems. For example, early decisions in identifying the purpose and objective of an AI system can alter its behavior and capabilities, and the dynamics of the deployment setting (such as end users or impacted individuals) can shape the positive or negative impacts of AI system decisions. As a result, the best intentions within one dimension of the AI lifecycle can be undermined via interactions with decisions and conditions in other, later activities. This complexity and varying levels of visibility can introduce uncertainty. Once deployed and in use, AI systems may sometimes perform poorly, manifest unanticipated negative impacts, or violate legal or ethical norms. These risks and incidents can result from a variety of factors. For example, downstream decisions can be influenced by end user over-trust or under-trust, and other complexities related to AI-supported decision-making.\n\nAnticipating, articulating, assessing and documenting AI systems’ knowledge limits and how system output may be utilized and overseen by humans can help mitigate the uncertainty associated with the realities of AI system deployments. Rigorous design processes include defining system knowledge limits, which are confirmed and refined based on TEVV processes.
Suggested Actions
\n","references":"","uuid":"df77f607-c5ba-41fd-bb51-23fb49e44f78","family":"Map","subControls":"","weight":0,"title":"MAP 2.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 2.2"},{"description":"MAP 2.3 - Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.
About
Standard testing and evaluation protocols provide a basis for confirming that a system is operating as designed and claimed. AI systems’ complexities create challenges for traditional testing and evaluation methodologies, which tend to be designed for static or isolated system performance. Opportunities for risk continue well beyond design and deployment, into system operation and application of system-enabled decisions. Testing and evaluation methodologies and metrics therefore address a continuum of activities. TEVV is enhanced when key metrics for performance, safety, and reliability are interpreted in a socio-technical context and not confined to the boundaries of the AI system pipeline. \n\nOther challenges for managing AI risks relate to dependence on large scale datasets, which can raise data quality and validity concerns. The difficulty of finding the “right” data may lead AI actors to select datasets based more on accessibility and availability than on suitability for operationalizing the phenomenon that the AI system intends to support or inform. Such decisions could contribute to an environment where the data used in processes is not fully representative of the populations or phenomena that are being modeled, introducing downstream risks. Practices such as dataset reuse may also disconnect data from the social contexts and time periods of its creation. This contributes to questions about the validity of the underlying dataset for providing proxies, measures, or predictors within the model.
Suggested Actions
\n","references":"","uuid":"c5f89404-5d63-414e-a638-aa213aaa3e84","family":"Map","subControls":"","weight":0,"title":"MAP 2.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 2.3"},{"description":"MAP 3.1 - Potential benefits of intended AI system functionality and performance are examined and documented.
About
AI systems have enormous potential to improve quality of life, enhance economic prosperity and security, and reduce costs. Organizations are encouraged to define and document system purpose and utility, and its potential positive impacts and benefits beyond current known performance benchmarks.\n\nIt is encouraged that risk management and assessment of benefits and impacts include processes for regular and meaningful communication with potentially affected groups and communities. These stakeholders can provide valuable input related to systems’ benefits and possible limitations. Organizations may differ in the types and number of stakeholders with which they engage.\n\nOther approaches such as human-centered design (HCD) and value-sensitive design (VSD) can help AI teams to engage broadly with individuals and communities. This type of engagement can enable AI teams to learn about how a given technology may cause positive or negative impacts that were not originally considered or intended.
Suggested Actions
\n","references":"","uuid":"bb6a153f-ad45-4305-a09d-6c787f7008b6","family":"Map","subControls":"","weight":0,"title":"MAP 3.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 3.1"},{"description":"MAP 3.2 - Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness - as connected to organizational risk tolerance - are examined and documented.
About
Anticipating negative impacts of AI systems is a difficult task. Negative impacts can be due to many factors, such as system non-functionality or use outside of its operational limits, and may range from minor annoyance to serious injury, financial losses, or regulatory enforcement actions. AI actors can work with a broad set of stakeholders to improve their capacity for understanding systems’ potential impacts – and subsequently – systems’ risks.
Suggested Actions
\n","references":"","uuid":"6d8f6824-bc84-456c-98fb-a6d052c8a988","family":"Map","subControls":"","weight":0,"title":"MAP 3.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 3.2"},{"description":"MAP 3.3 - Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.
About
Systems that function in a narrow scope tend to enable better mapping, measurement, and management of risks in the learning or decision-making tasks and the system context. A narrow application scope also eases TEVV functions and the allocation of related resources within an organization.\n\nFor example, large language models or open-ended chatbot systems that interact with the public on the internet carry a large number of risks that may be difficult to map, measure, and manage due to the variability from both the decision-making task and the operational context. By contrast, a task-specific chatbot that uses templated responses following a defined “user journey” has a scope that can be more easily mapped, measured, and managed.
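To make the narrow-scope example concrete, the following sketch shows a hypothetical task-specific chatbot whose behavior is limited to templated responses along a defined user journey; the journey steps, templates, and fallback are illustrative assumptions, not part of the Playbook.

```python
# Illustrative sketch (hypothetical): a narrowly scoped chatbot whose outputs
# are templated responses along a defined "user journey", making its risks
# easier to map, measure, and manage than an open-ended generative system.
USER_JOURNEY = {
    "greeting": "Hello! I can help you check an order status or start a return.",
    "order_status": "Your order {order_id} is currently: {status}.",
    "start_return": "A return label for order {order_id} has been emailed to you.",
    "fallback": "I can only help with order status or returns. Connecting you to an agent.",
}

def respond(step: str, **fields: str) -> str:
    """Return the templated response for a journey step; anything outside
    the defined journey routes to a human hand-off."""
    template = USER_JOURNEY.get(step, USER_JOURNEY["fallback"])
    try:
        return template.format(**fields)
    except KeyError:           # missing fields also route to the fallback
        return USER_JOURNEY["fallback"]

print(respond("order_status", order_id="A123", status="shipped"))
print(respond("tell_me_a_joke"))   # outside the journey -> fallback
```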
Suggested Actions
\n","references":"","uuid":"30f7128d-42c3-4b0f-8654-7e82a9421ab7","family":"Map","subControls":"","weight":0,"title":"MAP 3.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 3.3"},{"description":"MAP 3.4 - Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed and documented.
About
Human-AI configurations can span from fully autonomous to fully manual. AI systems can autonomously make decisions, defer decision-making to a human expert, or be used by a human decision-maker as an additional opinion. In some scenarios, professionals with expertise in a specific domain work in conjunction with an AI system towards a specific end goal—for example, a decision about another individual or individuals. Depending on the purpose of the system, the expert may interact with the AI system but is rarely part of the design or development of the system itself. These experts are not necessarily familiar with machine learning, data science, computer science, or other fields traditionally associated with AI design or development and - depending on the application - will likely not require such familiarity. For example, for AI systems that are deployed in health care delivery, the experts are the physicians, who bring their expertise about medicine—not data science, data modeling and engineering, or other computational factors. The challenge in these settings is not educating the end user about AI system capabilities, but rather leveraging, and not replacing, practitioner domain expertise.\n\nQuestions remain about how to configure humans and automation for managing AI risks. Risk management is enhanced when organizations that design, develop or deploy AI systems for use by professional operators and practitioners:\n\n- are aware of these knowledge limitations and strive to identify risks in human-AI interactions and configurations across all contexts, and the potential resulting impacts, \n- define and differentiate the various human roles and responsibilities when using or interacting with AI systems, and\n- determine proficiency standards for AI system operation in proposed context of use, as enumerated in MAP-1 and established in GOVERN-3.2.
Suggested Actions
\n","references":"","uuid":"9becb7a7-d582-4998-90cc-2165f517796a","family":"Map","subControls":"","weight":0,"title":"MAP 3.4","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 3.4"},{"description":"MAP 3.5 - Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from GOVERN function.
About
As AI systems have evolved in accuracy and precision, computational systems have moved from being used purely for decision support—or for explicit use by and under the control of a human operator—to automated decision making with limited input from humans. Computational decision support systems augment another, typically human, system in making decisions. These types of configurations increase the likelihood of outputs being produced with little human involvement. \n\nDefining and differentiating various human roles and responsibilities for AI systems’ governance, and differentiating AI system overseers and those using or interacting with AI systems can enhance AI risk management activities. \n\nIn critical systems, high-stakes settings, and systems deemed high-risk it is of vital importance to evaluate risks and effectiveness of oversight procedures before an AI system is deployed.\n\nUltimately, AI system oversight is a shared responsibility, and attempts to properly authorize or govern oversight practices will not be effective without organizational buy-in and accountability mechanisms, for example those suggested in the GOVERN function.
Suggested Actions
\n","references":"","uuid":"d50a648a-4de1-4e97-9e88-7bbd9f3586c2","family":"Map","subControls":"","weight":0,"title":"MAP 3.5","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 3.5"},{"description":"MAP 4.1 - Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third-party’s intellectual property or other rights.
About
Technologies and personnel from third parties are additional potential sources of risk to consider during AI risk management activities. Such risks may be difficult to map since risk priorities or tolerances may not be the same as those of the deployer organization.\n\nFor example, the use of pre-trained models, which tend to rely on large uncurated datasets and often have undisclosed origins, has raised concerns about privacy, bias, and unanticipated effects, along with possible introduction of increased levels of statistical uncertainty, difficulty with reproducibility, and issues with scientific validity.
Suggested Actions
\n","references":"","uuid":"e8bae55d-3401-4283-a8ef-fe6e642f8314","family":"Map","subControls":"","weight":0,"title":"MAP 4.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 4.1"},{"description":"MAP 4.2 - Internal risk controls for components of the AI system including third-party AI technologies are identified and documented.
About
In the course of their work, AI actors often utilize open-source, or otherwise freely available, third-party technologies – some of which may have privacy, bias, and security risks. Organizations may consider internal risk controls for these technology sources and build up practices for evaluating third-party material prior to deployment.
Suggested Actions
\n","references":"","uuid":"7697a5ed-c081-4ace-a6bd-8255175b4659","family":"Map","subControls":"","weight":0,"title":"MAP 4.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 4.2"},{"description":"MAP 5.1 - Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.
About
AI actors can evaluate, document and triage the likelihood of AI system impacts identified in Map 5.1. Likelihood estimates may then be assessed and judged for go/no-go decisions about deploying an AI system. If an organization decides to proceed with deploying the system, the likelihood and magnitude estimates can be used to assign TEVV resources appropriate for the risk level.
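As an illustrative, non-normative sketch of how likelihood and magnitude estimates might be combined to triage impacts and suggest TEVV resourcing, the following snippet uses simple ordinal scales; the scale labels, scoring, and thresholds are hypothetical assumptions rather than NIST-defined values.

```python
# Illustrative sketch (hypothetical scales and thresholds): triaging identified
# impacts by likelihood x magnitude and suggesting a TEVV resourcing tier.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "almost_certain": 4}
MAGNITUDE = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}

def triage(impacts):
    """impacts: list of (name, likelihood, magnitude) tuples."""
    scored = []
    for name, likelihood, magnitude in impacts:
        score = LIKELIHOOD[likelihood] * MAGNITUDE[magnitude]
        if score < 6:
            tier = "standard TEVV"
        elif score < 12:
            tier = "enhanced TEVV"
        else:
            tier = "escalate / revisit go decision"
        scored.append((score, name, tier))
    return sorted(scored, reverse=True)   # highest-risk impacts first

for score, name, tier in triage([
    ("mis-identification of applicants", "possible", "serious"),
    ("minor UI latency", "likely", "negligible"),
]):
    print(f"{name}: score={score}, suggested resourcing={tier}")
```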
Suggested Actions
\n","references":"","uuid":"2e07a637-b923-4980-b95e-e0638b8289e5","family":"Map","subControls":"","weight":0,"title":"MAP 5.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 5.1"},{"description":"MAP 5.2 - Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.
About
AI systems are socio-technical in nature and can have positive, neutral, or negative implications that extend beyond their stated purpose. Negative impacts can be wide-ranging and affect individuals, groups, communities, organizations, and society, as well as the environment and national security.\n\nOrganizations can create a baseline for system monitoring to increase opportunities for detecting emergent risks. After an AI system is deployed, engaging different stakeholder groups – who may be aware of, or experience, benefits or negative impacts that are unknown to AI actors involved in the design, development and deployment activities – allows organizations to understand and monitor system benefits and potential negative impacts more readily.
Suggested Actions
\n","references":"","uuid":"893f3473-623d-4faf-97d1-d873a812a618","family":"Map","subControls":"","weight":0,"title":"MAP 5.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MAP 5.2"},{"description":"MEASURE 1.1 - Approaches and metrics for measurement of AI risks enumerated during the Map function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.
About
The development and utility of trustworthy AI systems depends on reliable measurements and evaluations of underlying technologies and their use. Compared with traditional software systems, AI technologies bring new failure modes and an inherent dependence on training data and methods, which directly ties to data quality and representativeness. Additionally, AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed. In other words, what should be measured depends on the purpose, audience, and needs of the evaluations. \n \nThese factors influence the selection of approaches and metrics for measurement of AI risks enumerated during the Map function. The AI landscape is evolving and so are the methods and metrics for AI measurement. The evolution of metrics is key to maintaining efficacy of the measures.
Suggested Actions
\n","references":"","uuid":"297e9c4d-6b82-4d79-b4b4-8f145789a617","family":"Measure","subControls":"","weight":0,"title":"MEASURE 1.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 1.1"},{"description":"MEASURE 1.2 - Appropriateness of AI metrics and effectiveness of existing controls is regularly assessed and updated including reports of errors and impacts on affected communities.
About
Different AI tasks and techniques, such as neural networks or natural language processing, benefit from different evaluation techniques. The use case and the particular settings in which the AI system is used also affect the appropriateness of the evaluation techniques. Changes in operational settings, data drift, and model drift are among the factors suggesting that regularly assessing and updating the appropriateness of AI metrics and their effectiveness can enhance the reliability of AI system measurements.
Suggested Actions
\n","references":"","uuid":"9aa003b8-0798-4f72-b152-4f77159a4562","family":"Measure","subControls":"","weight":0,"title":"MEASURE 1.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 1.2"},{"description":"MEASURE 1.3 - Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.
About
Current AI systems are brittle, their failure modes are not well described, and the systems are dependent on the context in which they were developed and do not transfer well outside of the training environment. Reliance on local evaluations, along with continuous monitoring of these systems, will be necessary. Measurements that extend beyond classical measures (which average across test cases) or expand to focus on pockets of failures where there are potentially significant costs can improve the reliability of risk management activities. Feedback from affected communities about how AI systems are being used can make AI evaluation purposeful. Involving internal experts who did not serve as front-line developers for the system, and/or independent assessors, in regular assessments of AI systems supports a more complete characterization of AI systems’ performance and trustworthiness.
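To illustrate how measurements can extend beyond a single averaged metric and surface pockets of failure, the following sketch reports accuracy per data slice; the slice labels and records are hypothetical placeholders, not evaluation data from the Playbook.

```python
# Illustrative sketch (hypothetical data): disaggregated (per-slice) accuracy
# makes "pockets of failure" visible that a single averaged metric can hide.
from collections import defaultdict

def sliced_accuracy(records):
    """records: iterable of (slice_label, y_true, y_pred)."""
    correct, total = defaultdict(int), defaultdict(int)
    for slice_label, y_true, y_pred in records:
        total[slice_label] += 1
        correct[slice_label] += int(y_true == y_pred)
    return {s: correct[s] / total[s] for s in total}

records = [
    ("urban", 1, 1), ("urban", 0, 0), ("urban", 1, 1), ("urban", 0, 0),
    ("rural", 1, 0), ("rural", 0, 0), ("rural", 1, 0), ("rural", 0, 1),
]
overall = sum(t == p for _, t, p in records) / len(records)
print("overall accuracy:", overall)                      # 0.625 looks acceptable...
print("per-slice accuracy:", sliced_accuracy(records))   # ...but the 'rural' slice is 0.25
```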
Suggested Actions
\n","references":"","uuid":"62482be4-4561-4fc4-b76e-6dafd1f795cc","family":"Measure","subControls":"","weight":0,"title":"MEASURE 1.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 1.3"},{"description":"MEASURE 2.1 - Test sets, metrics, and details about the tools used during test, evaluation, validation, and verification (TEVV) are documented.
About
Documenting measurement approaches, test sets, metrics, processes and materials used, and associated details builds the foundation for a valid, reliable measurement process. Documentation enables repeatability and consistency, and can enhance AI risk management decisions.
Suggested Actions
\n","references":"","uuid":"f92f54cc-a93a-4615-bebf-1be2f3ba6a08","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.1"},{"description":"MEASURE 2.2 - Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.
About
Measurement and evaluation of AI systems often involves testing with human subjects or using data captured from human subjects. Protection of human subjects is required by law when carrying out federally funded research, and is a domain specific requirement for some disciplines. Standard human subjects protection procedures include protecting the welfare and interests of human subjects, designing evaluations to minimize risks to subjects, and completion of mandatory training regarding legal requirements and expectations. \n \nEvaluations of AI system performance that utilize human subjects or human subject data should reflect the population within the context of use. AI system activities utilizing non-representative data may lead to inaccurate assessments or negative and harmful outcomes. It is often difficult – and sometimes impossible – to collect data or perform evaluation tasks that reflect the full operational purview of an AI system. Methods for collecting, annotating, or using these data can also contribute to the challenge. To counteract these challenges, organizations can connect human subjects data collection, and dataset practices, to AI system contexts and purposes and do so in close collaboration with AI Actors from the relevant domains.
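One way to check whether an evaluation sample reflects the population within the context of use is to compare group shares in the sample against reference population shares and flag large gaps. The sketch below illustrates this; the group labels, population shares, and the ratio threshold are hypothetical assumptions.

```python
# Illustrative sketch (hypothetical groups, shares, and threshold): flag groups
# whose share in the evaluation sample falls well below their population share.
from collections import Counter

POPULATION_SHARE = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

def representation_gaps(sample_groups, min_ratio=0.5):
    counts = Counter(sample_groups)
    n = len(sample_groups)
    gaps = {}
    for group, expected in POPULATION_SHARE.items():
        observed = counts.get(group, 0) / n
        if observed < min_ratio * expected:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

sample = ["18-34"] * 60 + ["35-54"] * 35 + ["55+"] * 5
print(representation_gaps(sample))   # flags the 55+ group as under-represented
```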
Suggested Actions
\n","references":"","uuid":"55d52b7d-3bc5-49f5-baa4-3f02ad854ab5","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.2"},{"description":"MEASURE 2.3 - AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.
About
The current risk and impact environment suggests AI system performance estimates are insufficient and require a deeper understanding of deployment context of use. Computationally focused performance testing and evaluation schemes are restricted to test data sets and in silico techniques. These approaches do not directly evaluate risks and impacts in real world environments and can only predict what might create impact based on an approximation of expected AI use. To properly manage risks, more direct information is necessary to understand how and under what conditions deployed AI creates impacts, who is most likely to be impacted, and what that experience is like.
Suggested Actions
\n","references":"","uuid":"3ec6c4f1-8dfe-4fa9-bb98-0d9464e8295e","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.3"},{"description":"MEASURE 2.4 - The functionality and behavior of the AI system and its components – as identified in the MAP function – are monitored when in production.
About
AI systems may encounter new issues and risks while in production as the environment evolves over time. This effect, often referred to as “drift”, means AI systems no longer meet the assumptions and limitations of the original design. Regular monitoring allows AI Actors to track the functionality and behavior of the AI system and its components – as identified in the MAP function – and enhance the speed and efficacy of necessary system interventions.
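One common heuristic for monitoring input or score drift in production is the Population Stability Index (PSI). The sketch below shows a minimal PSI computation against a reference distribution; the bin count, example data, and the 0.2 alert threshold are assumptions, not requirements of the framework.

```python
# Illustrative sketch (assumed bins, data, and 0.2 threshold): monitoring drift
# between a reference score distribution and the distribution seen in production
# using the Population Stability Index (PSI).
import math

def psi(reference, production, bins=10):
    lo, hi = min(reference), max(reference)

    def shares(values):
        counts = [0] * bins
        for v in values:
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(idx, 0), bins - 1)] += 1
        # Small smoothing term avoids division by zero for empty bins.
        return [(c + 1e-6) / (len(values) + 1e-6 * bins) for c in counts]

    ref, prod = shares(reference), shares(production)
    return sum((p - r) * math.log(p / r) for r, p in zip(ref, prod))

reference = [i / 100 for i in range(100)]                    # scores seen at validation
production = [min(i / 100 + 0.3, 1.0) for i in range(100)]   # shifted scores in production
score = psi(reference, production)
print(f"PSI = {score:.2f}", "-> investigate drift" if score > 0.2 else "-> stable")
```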
Suggested Actions
\n","references":"","uuid":"2b34a197-e6c1-41a4-a3dc-6d1cd04902c2","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.4","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.4"},{"description":"MEASURE 2.5 - The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.
About
An AI system that is not validated or that fails validation may be inaccurate or unreliable or may generalize poorly to data and settings beyond its training, creating and increasing AI risks and reducing trustworthiness. AI Actors can improve system validity by creating processes for exploring and documenting system limitations. This includes broad consideration of purposes and uses for which the system was not designed. \n\nValidation risks include the use of proxies or other indicators that are often constructed by AI development teams to operationalize phenomena that are either not directly observable or measurable (e.g., fairness, hireability, honesty, propensity to commit a crime). Teams can mitigate these risks by demonstrating that the indicator is measuring the concept it claims to measure (also known as construct validity). Without this and other types of validation, various negative properties or impacts may go undetected, including the presence of confounding variables, potential spurious correlations, or error propagation and its potential impact on other interconnected systems.
Suggested Actions
\n","references":"","uuid":"d947f143-80d1-4bbe-b2ee-0d28ed1d92f9","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.5","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.5"},{"description":"MEASURE 2.6 - AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics implicate system reliability and robustness, real-time monitoring, and response times for AI system failures.
About
Many AI systems are being introduced into settings such as transportation, manufacturing or security, where failures may give rise to various physical or environmental harms. AI systems that may endanger human life, health, property or the environment are tested thoroughly prior to deployment, and are regularly evaluated to confirm the system is safe during normal operations, and in settings beyond its proposed use and knowledge limits. \n\nMeasuring activities for safety often relate to exhaustive testing in development and deployment contexts, understanding the limits of a system’s reliable, robust, and safe behavior, and real-time monitoring of various aspects of system performance. These activities are typically conducted along with other risk mapping, management, and governance tasks such as avoiding past failed designs, establishing and rehearsing incident response plans that enable quick responses to system problems, the instantiation of redundant functionality to cover failures, and transparent and accountable governance. System safety incidents or failures are frequently reported to be related to organizational dynamics and culture. Independent auditors may bring important independent perspectives for reviewing evidence of AI system safety.
Suggested Actions
\n","references":"","uuid":"6fa3a55f-19c9-4c26-8cb6-e15e6a657ed2","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.6","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.6"},{"description":"MEASURE 2.7 - AI system security and resilience – as identified in the MAP function – are evaluated and documented.
About
AI systems, as well as the ecosystems in which they are deployed, may be said to be resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary. Common security concerns relate to adversarial examples, data poisoning, and the exfiltration of models, training data, or other intellectual property through AI system endpoints. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure. \n\nSecurity and resilience are related but distinct characteristics. While resilience is the ability to return to normal function after an unexpected adverse event, security includes resilience but also encompasses protocols to avoid, protect against, respond to, or recover from attacks. Resilience relates to robustness and encompasses unexpected or adversarial use (or abuse or misuse) of the model or data.
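As a minimal, non-normative probe of robustness (one narrow facet of resilience), the following sketch measures how often a hypothetical classifier changes its decision when inputs are perturbed by small random noise; the stand-in model, inputs, and perturbation size are assumptions, and a real security evaluation would also cover adversarial examples, data poisoning, and exfiltration paths.

```python
# Illustrative sketch (hypothetical model and data): how often do small input
# perturbations flip a classifier's decision? Low stability can indicate
# brittleness worth investigating alongside broader security evaluations.
import random

def predict(features):
    """Stand-in for a deployed model: a fixed linear score with a 0.5 cutoff."""
    weights = [0.8, -0.4, 0.3]
    score = sum(w * x for w, x in zip(weights, features))
    return int(score > 0.5)

def perturbation_stability(inputs, epsilon=0.05, trials=50, seed=0):
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = predict(x)
        flips = sum(
            predict([v + rng.uniform(-epsilon, epsilon) for v in x]) != base
            for _ in range(trials)
        )
        stable += flips == 0
    return stable / len(inputs)

test_inputs = [[0.9, 0.1, 0.2], [0.6, 0.2, 0.4], [0.7, 0.5, 0.1]]
print("fraction of inputs with stable decisions:", perturbation_stability(test_inputs))
```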
Suggested Actions
\n","references":"","uuid":"524eb583-bff2-4519-8762-0838c1d1e288","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.7","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.7"},{"description":"MEASURE 2.8 - Risks associated with transparency and accountability – as identified in the MAP function – are examined and documented.
About
Transparency enables meaningful visibility into entire AI pipelines, workflows, processes or organizations and decreases information asymmetry between AI developers and operators and other AI Actors and impacted communities. Transparency is a central element of effective AI risk management that enables insight into how an AI system is working, and the ability to address risks if and when they emerge. The ability for system users, individuals, or impacted communities to seek redress for incorrect or problematic AI system outcomes is one control for transparency and accountability. Higher level recourse processes are typically enabled by lower level implementation efforts directed at explainability and interpretability functionality. See Measure 2.9.\n\nTransparency and accountability across organizations and processes are crucial to reducing AI risks. Accountable leadership – whether individuals or groups – and transparent roles, responsibilities, and lines of communication foster and incentivize quality assurance and risk management activities within organizations.\n\nLack of transparency complicates measurement of trustworthiness and whether AI systems or organizations are subject to effects of various individual and group biases and design blindspots and could lead to diminished user, organizational and community trust, and decreased overall system value. Instituting accountable and transparent organizational structures along with documenting system risks can enable system improvement and risk management efforts, allowing AI actors along the lifecycle to identify errors, suggest improvements, and figure out new ways to contextualize and generalize AI system features and outcomes.
Suggested Actions
\n","references":"","uuid":"19cec80b-7fbb-47e9-8c6a-b4133ebf718f","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.8","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.8"},{"description":"MEASURE 2.9 - The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the MAP function – and to inform responsible use and governance.
About
Explainability and interpretability assist those operating or overseeing an AI system, as well as users of an AI system, to gain deeper insights into the functionality and trustworthiness of the system, including its outputs.\n\nExplainable and interpretable AI systems offer information that helps end users understand the purposes and potential impact of an AI system. Risk from lack of explainability may be managed by describing how AI systems function, with descriptions tailored to individual differences such as the user’s role, knowledge, and skill level. Explainable systems can be debugged and monitored more easily, and they lend themselves to more thorough documentation, audit, and governance.\n\nRisks to interpretability often can be addressed by communicating a description of why an AI system made a particular prediction or recommendation. \n\nTransparency, explainability, and interpretability are distinct characteristics that support each other. Transparency can answer the question of “what happened”. Explainability can answer the question of “how” a decision was made in the system. Interpretability can answer the question of “why” a decision was made by the system and its meaning or context to the user.
Suggested Actions
\n","references":"","uuid":"ac4ea171-2437-42f6-bd6a-24db1298008a","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.9","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.9"},{"description":"MEASURE 2.10 - Privacy risk of the AI system – as identified in the MAP function – is examined and documented.
About
Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals’ agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation). \n\nPrivacy values such as anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment. Privacy-related risks may influence security, bias, and transparency and come with tradeoffs with these other characteristics. Like safety and security, specific technical features of an AI system may promote or reduce privacy. AI systems can also present new risks to privacy by allowing inference to identify individuals or previously private information about individuals.\n\nPrivacy-enhancing technologies (“PETs”) for AI, as well as data minimizing methods such as de-identification and aggregation for certain model outputs, can support design for privacy-enhanced AI systems. Under certain conditions such as data sparsity, privacy enhancing techniques can result in a loss in accuracy, impacting decisions about fairness and other values in certain domains.
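As a simple, non-normative illustration of a data-minimizing method, the sketch below aggregates outputs before release and suppresses small cells (in the spirit of de-identification thresholds); the example records, cell definitions, and the k=5 threshold are hypothetical assumptions.

```python
# Illustrative sketch (hypothetical data and k=5 threshold): aggregate outputs
# per cell and withhold any cell with fewer than k individuals before release,
# a simple data-minimizing step that reduces re-identification risk.
from collections import Counter

def aggregate_with_suppression(records, k=5):
    """records: iterable of (region, outcome) pairs; returns counts per
    (region, outcome) cell, suppressing cells with fewer than k individuals."""
    counts = Counter(records)
    return {
        cell: (count if count >= k else "<suppressed>")
        for cell, count in counts.items()
    }

records = [("north", "approved")] * 12 + [("north", "denied")] * 3 + [("south", "approved")] * 8
print(aggregate_with_suppression(records))
# {('north', 'approved'): 12, ('north', 'denied'): '<suppressed>', ('south', 'approved'): 8}
```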
Suggested Actions
\n","references":"","uuid":"83497b6d-3898-42c7-9ca9-ecea27454d0a","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.10","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.10"},{"description":"MEASURE 2.11 - Fairness and bias – as identified in the MAP function – is evaluated and results are documented.
About
Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination. Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures and may shift depending on application. Organizations’ risk management efforts will be enhanced by recognizing and considering these differences. Systems in which harmful biases are mitigated are not necessarily fair. For example, systems in which predictions are somewhat balanced across demographic groups may still be inaccessible to individuals with disabilities or affected by the digital divide or may exacerbate existing disparities or systemic biases.\n\nBias is broader than demographic balance and data representativeness. NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive. Each of these can occur in the absence of prejudice, partiality, or discriminatory intent. \n\n- Systemic bias can be present in AI datasets, the organizational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems.\n- Computational and statistical biases can be present in AI datasets and algorithmic processes, and often stem from systematic errors due to non-representative samples.\n- Human-cognitive biases relate to how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about purposes and functions of an AI system. Human-cognitive biases are omnipresent in decision-making processes across the AI lifecycle and system use, including the design, implementation, operation, and maintenance of AI.\n\nBias exists in many forms and can become ingrained in the automated systems that help make decisions about our lives. While bias is not always a negative phenomenon, AI systems can potentially increase the speed and scale of biases and perpetuate and amplify harms to individuals, groups, communities, organizations, and society.
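As one illustration of checking for computational and statistical bias, the sketch below compares selection rates across groups and reports the ratio of the lowest to the highest rate (an adverse-impact-style ratio); the group labels and decisions are hypothetical, and, as noted above, passing such a check does not by itself make a system fair.

```python
# Illustrative sketch (hypothetical data): compare selection rates across groups
# and report the ratio of lowest to highest rate. A large gap is a signal to
# investigate, not a complete fairness determination.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) with selected in {0, 1}."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        total[group] += 1
        selected[group] += outcome
    return {g: selected[g] / total[g] for g in total}

decisions = (
    [("group_a", 1)] * 60 + [("group_a", 0)] * 40 +
    [("group_b", 1)] * 35 + [("group_b", 0)] * 65
)
rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio = {ratio:.2f}")   # 0.58: a large gap worth investigating
```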
Suggested Actions
\n","references":"","uuid":"64aea0bc-3f4b-455b-978c-662bf4cf773d","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.11","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.11"},{"description":"MEASURE 2.12 - Environmental impact and sustainability of AI model training and management activities – as identified in the MAP function – are assessed and documented.
About
Large-scale, high-performance computational resources used by AI systems for training and operation can contribute to environmental impacts. Direct negative impacts to the environment from these processes are related to energy consumption, water consumption, and greenhouse gas (GHG) emissions. The OECD has identified metrics for each type of negative direct impact. \n\nIndirect negative impacts to the environment reflect the complexity of interactions between human behavior, socio-economic systems, and the environment and can include induced consumption and “rebound effects”, where efficiency gains are offset by accelerated resource consumption. \n\nOther AI related environmental impacts can arise from the production of computational equipment and networks (e.g. mining and extraction of raw materials), transporting hardware, and electronic waste recycling or disposal.
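To show how a direct energy and greenhouse-gas estimate for a training run can be documented, the sketch below multiplies hardware power draw by runtime, a data-center overhead factor (PUE), and a grid carbon intensity; all numeric values are hypothetical placeholders, and real assessments would use measured values and the relevant published metrics (such as those identified by the OECD).

```python
# Illustrative sketch (all values hypothetical): back-of-the-envelope estimate
# of training energy use and greenhouse gas emissions.
#   energy (kWh) = accelerators x avg power (kW) x hours x PUE
#   emissions (kg CO2e) = energy x grid carbon intensity (kg CO2e per kWh)
def training_footprint(gpu_count, avg_power_watts, hours, pue=1.4, grid_kgco2e_per_kwh=0.4):
    energy_kwh = gpu_count * avg_power_watts / 1000 * hours * pue   # facility-level energy
    emissions_kgco2e = energy_kwh * grid_kgco2e_per_kwh
    return energy_kwh, emissions_kgco2e

energy, emissions = training_footprint(gpu_count=8, avg_power_watts=300, hours=72)
print(f"~{energy:,.0f} kWh, ~{emissions:,.0f} kg CO2e")   # ~242 kWh, ~97 kg CO2e
```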
Suggested Actions
\n","references":"","uuid":"5665c066-0ff6-495f-b742-c938a471458d","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.12","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.12"},{"description":"MEASURE 2.13 - Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and documented.
About
The development of metrics is a process often considered to be objective but, as a human and organization driven endeavor, can reflect implicit and systemic biases, and may inadvertently reflect factors unrelated to the target function. Measurement approaches can be oversimplified, gamed, lack critical nuance, become used and relied upon in unexpected ways, or fail to account for differences in affected groups and contexts.\n\nRevisiting the metrics chosen in Measure 2.1 through 2.12 in a process of continual improvement can help AI actors to evaluate and document metric effectiveness and make necessary course corrections.
Suggested Actions
\n","references":"","uuid":"b9818ab3-800d-4e8d-bde9-bafe1160b31e","family":"Measure","subControls":"","weight":0,"title":"MEASURE 2.13","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 2.13"},{"description":"MEASURE 3.1 - Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.
About
For trustworthy AI systems, regular system monitoring is carried out in accordance with organizational governance policies, AI actor roles and responsibilities, and within a culture of continual improvement. If and when emergent or complex risks arise, it may be necessary to adapt internal risk management procedures, such as regular monitoring, to stay on course. Documentation, resources, and training are part of an overall strategy to support AI actors as they investigate and respond to AI system errors, incidents or negative impacts.
Suggested Actions
\n","references":"","uuid":"826ec8bf-f60e-4a28-b3b7-fd4e3890aea9","family":"Measure","subControls":"","weight":0,"title":"MEASURE 3.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 3.1"},{"description":"MEASURE 3.2 - Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.
About
Risks identified in the Map function may be complex, emerge over time, or be difficult to measure. Systematic methods for risk tracking, including novel measurement approaches, can be established as part of regular monitoring and improvement processes.
Suggested Actions
\n","references":"","uuid":"0854efd4-7634-4a6d-bd2d-a370cdeac6bd","family":"Measure","subControls":"","weight":0,"title":"MEASURE 3.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 3.2"},{"description":"MEASURE 3.3 - Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.
About
Assessing impact is a two-way effort. Many AI system outcomes and impacts may not be visible or recognizable to AI actors across the development and deployment dimensions of the AI lifecycle, and may require direct feedback about system outcomes from the perspective of end users and impacted groups.\n\nFeedback can be collected indirectly, via systems that are mechanized to collect errors and other feedback from end users and operators.\n\nMetrics and insights developed in this sub-category feed into Manage 4.1 and 4.2.
Suggested Actions
\n","references":"","uuid":"424af1c9-1d16-4195-a6f9-6a1c07fbbbbd","family":"Measure","subControls":"","weight":0,"title":"MEASURE 3.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 3.3"},{"description":"MEASURE 4.1 - Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.
About
AI Actors carrying out TEVV tasks may have difficulty evaluating impacts within the system context of use. AI system risks and impacts are often best described by end users and others who may be affected by output and subsequent decisions. AI Actors can elicit feedback from impacted individuals and communities via participatory engagement processes established in Govern 5.1 and 5.2, and carried out in Map 1.6, 5.1, and 5.2. \n\nActivities described in the Measure function enable AI actors to evaluate feedback from impacted individuals and communities. To increase awareness of insights, feedback can be evaluated in close collaboration with AI actors responsible for impact assessment, human-factors, and governance and oversight tasks, as well as with other socio-technical domain experts and researchers. To gain broader expertise for interpreting evaluation outcomes, organizations may consider collaborating with advocacy groups and civil society organizations. \n\nInsights based on this type of analysis can inform TEVV-based decisions about metrics and related courses of action.
Suggested Actions
\n","references":"","uuid":"fbf12af5-726e-4f8e-a9c8-2f5522ef9e42","family":"Measure","subControls":"","weight":0,"title":"MEASURE 4.1","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 4.1"},{"description":"MEASURE 4.2 - Measurement results regarding AI system trustworthiness in deployment context(s) and across AI lifecycle are informed by input from domain experts and other relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.
About
Feedback captured from relevant AI Actors can be evaluated in combination with output from Measure 2.5 to 2.11 to determine if the AI system is performing within pre-defined operational limits for validity and reliability, safety, security and resilience, privacy, bias and fairness, explainability and interpretability, and transparency and accountability. This feedback provides an additional layer of insight about AI system performance, including potential misuse or reuse outside of intended settings. \n\n\nInsights based on this type of analysis can inform TEVV-based decisions about metrics and related courses of action.
Suggested Actions
\n","references":"","uuid":"d56ee29d-a0b7-4c8e-8887-5dc9892662b2","family":"Measure","subControls":"","weight":0,"title":"MEASURE 4.2","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 4.2"},{"description":"MEASURE 4.3 - Measurable performance improvements or declines based on consultations with relevant AI actors including affected communities, and field data about context-relevant risks and trustworthiness characteristics, are identified and documented.
About
TEVV activities conducted throughout the AI system lifecycle can provide baseline quantitative measures for trustworthy characteristics. When combined with results from Measure 2.5 to 2.11 and Measure 4.1 and 4.2, TEVV actors can maintain a comprehensive view of system performance. These measures can be augmented through participatory engagement with potentially impacted communities or other forms of stakeholder elicitation about AI systems’ impacts. These sources of information can allow AI actors to explore potential adjustments to system components, adapt operating conditions, or institute performance improvements.
Suggested Actions
\n","references":"","uuid":"72f570c0-282d-42b8-9de8-ecca76cc1325","family":"Measure","subControls":"","weight":0,"title":"MEASURE 4.3","enhancements":"","relatedControls":"","catalogueID":4836,"practiceLevel":"","assessmentPlan":"Organizations can document the following: \nAI Transparency Resources:","mappings":"","controlId":"MEASURE 4.3"}]}}