backnotprop and tmccoy14 committed on
Commit: b855b46
1 Parent(s): 809c27b

Update frameworks/sample/controls.json (#3)

- Update frameworks/sample/controls.json (a8036f2d76824e0208be4de86e09b663defa5c57)


Co-authored-by: Tucker McCoy <tmccoy14@users.noreply.huggingface.co>

Files changed (1)
  1. frameworks/sample/controls.json +3 -3
frameworks/sample/controls.json CHANGED
@@ -6,7 +6,7 @@
   "description": "The organization shall conduct comprehensive assessments to identify and mitigate potential biases in the data used to train AI systems. The following measures shall be implemented:\n\na. Bias Assessment Methodology: Establish a documented methodology for assessing biases in training data, including the specific types of biases to be evaluated (e.g., representational bias, sample bias, historical bias) and the techniques used to identify them (e.g., statistical analysis, fairness metrics).\n\nb. Bias Assessment Frequency: Conduct data bias assessments at regular intervals, such as prior to the initial use of the data for training, whenever significant changes are made to the data, and at least annually.\n\nc. Bias Assessment Reporting: Document the results of data bias assessments, including identified biases, their potential impact on AI system outcomes, and recommended mitigation strategies.\n\nd. Bias Mitigation Planning: Develop and maintain a bias mitigation plan that outlines the specific actions to be taken to address identified biases in the training data. This may include techniques such as data resampling, data augmentation, or the use of bias mitigation algorithms.\n\ne. Bias Mitigation Implementation: Implement the bias mitigation plan and document the actions taken to reduce or eliminate identified biases in the training data.\n\nf. Ongoing Monitoring: Establish processes for ongoing monitoring of AI system outcomes to detect and respond to any emergent biases that may arise over time.",
   "controlCategory": "Data Bias",
   "readableControlId": "AIDBA-1",
-  "severity": "medium",
+  "severity": "high",
   "automationPlatforms": [],
   "criteria": [
   {
@@ -47,7 +47,7 @@
   "description": "The organization shall ensure that data collection and preprocessing steps are designed to minimize the introduction of biases and ensure data quality. The following measures shall be implemented:\n\na. Data Source Selection: Identify and select diverse and representative data sources to reduce the risk of biases arising from limited or skewed data.\n\nb. Data Sampling Techniques: Employ appropriate data sampling techniques, such as stratified sampling or oversampling, to ensure balanced representation of different groups or classes in the training data.\n\nc. Data Quality Checks: Implement data quality checks to identify and address issues such as missing values, outliers, inconsistencies, and errors in the collected data.\n\nd. Data Preprocessing Guidelines: Establish and follow guidelines for data preprocessing tasks, including data cleaning, normalization, and feature selection, to maintain data integrity and reduce the introduction of biases.\n\ne. Data Labeling and Annotation: Ensure that data labeling and annotation processes are performed consistently and objectively, with clear guidelines and quality control measures to minimize the introduction of biases.\n\nf. Data Documentation: Maintain comprehensive documentation of data collection and preprocessing steps, including data sources, sampling methods, preprocessing techniques, and any assumptions made during the process.",
   "controlCategory": "Data Bias",
   "readableControlId": "AIDBA-2",
-  "severity": "medium",
+  "severity": "high",
   "automationPlatforms": [],
   "criteria": [
   {
@@ -170,7 +170,7 @@
   "description": "The organization shall ensure that the deployed AI system is continuously monitored for any emerging biases or fairness issues. The following measures shall be implemented:\n\na. Monitoring Plan: Establish a comprehensive monitoring plan that outlines the key metrics, data sources, and frequency of monitoring for the deployed AI system. The plan should cover both performance and fairness aspects of the system.\n\nb. Monitoring Mechanisms: Implement automated monitoring mechanisms to continuously collect and analyze data from the deployed AI system. These mechanisms should be designed to detect any deviations from the expected performance or fairness metrics.\n\nc. Fairness Drift Detection: Monitor for fairness drift, which refers to the gradual degradation of the AI system's fairness properties over time. Implement techniques to detect and quantify fairness drift, such as statistical tests or comparison with baseline fairness metrics.\n\nd. Bias Incident Response: Establish processes and protocols for promptly addressing any biases or fairness issues identified during monitoring. This includes conducting root cause analysis, developing mitigation strategies, and implementing necessary updates or adjustments to the AI system.\n\ne. Monitoring Reporting: Generate regular monitoring reports that provide insights into the AI system's performance and fairness metrics. These reports should be reviewed by relevant stakeholders and used to inform decision-making and continuous improvement efforts.\n\nf. Stakeholder Feedback: Establish channels for collecting and incorporating feedback from stakeholders, including users, customers, and impacted communities. Regularly solicit feedback on the AI system's fairness, transparency, and accountability, and use this feedback to guide monitoring and improvement efforts.",
   "controlCategory": "Data Bias",
   "readableControlId": "AIDBA-5",
-  "severity": "medium",
+  "severity": "low",
   "automationPlatforms": [],
   "criteria": [
   {
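The net effect of this commit is three severity changes: AIDBA-1 and AIDBA-2 move from "medium" to "high", and AIDBA-5 moves from "medium" to "low". A minimal sketch of how such edits could be sanity-checked before committing, assuming controls.json holds a list of control objects with the `readableControlId` and `severity` fields shown in the diff (the allowed severity set and the helper name are assumptions for illustration, not taken from this repository):

```python
# Assumed set of valid severity values; the real schema may allow others.
ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def invalid_severities(controls):
    """Return (readableControlId, severity) pairs whose severity is unrecognized."""
    return [
        (c.get("readableControlId"), c.get("severity"))
        for c in controls
        if c.get("severity") not in ALLOWED_SEVERITIES
    ]

# The three controls touched by this commit, with their new values.
controls = [
    {"readableControlId": "AIDBA-1", "severity": "high"},
    {"readableControlId": "AIDBA-2", "severity": "high"},
    {"readableControlId": "AIDBA-5", "severity": "low"},
]

print(invalid_severities(controls))  # []
```

In practice the list would come from `json.load()` on frameworks/sample/controls.json, and a non-empty result would fail the check before the change is merged.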