biasaware/config/methodologies.json
{
"Gender Distribution (Term Identity Diversity)": {
"description": "Gender distribution is a fundamental component of identity diversity, serving as a critical indicator of the presence and equilibrium of various gender identities within a given population or dataset. An understanding of gender distribution holds immense significance for fostering inclusivity and equity across diverse contexts, including workplaces, educational institutions, and social environments.\n\nIn this analysis, we employ a structured approach to examine gender distribution. We categorize gender identities into predefined groups, each representing specific gender-related attributes or expressions. These categories help us comprehensively assess the gender composition within the dataset or population under scrutiny. Here's a breakdown of the terms used in the analysis:\n- No Gender: This category encompasses text that lacks significant gender-specific terms or maintains a balance between male and female terms, resulting in a neutral or 'no gender' classification.\n- Equal Gender: The 'Equal Gender' category signifies a balance between male and female terms in the analyzed text, indicating an equitable representation of both genders.\n- Female Positive Gender: Within this category, we include text that exhibits a higher prevalence of female-related terms.\n- Male Positive Gender: Similarly, the 'Male Positive Gender' category comprises text with a higher occurrence of male-related terms.\n- Female Strongly Positive Gender: This subcategory represents text with a significantly stronger presence of female-related terms, exceeding a 75% threshold for strong gender association.\n- Male Strongly Positive Gender: Analogous to the previous subcategory, 'Male Strongly Positive Gender' represents text with a significantly stronger presence of male-related terms, exceeding a 75% threshold for strong gender association.\n\nPlease note that the following categories are based on the analysis of text content and do not necessarily indicate the gender identities of individuals described within the text.",
"short_description": "This methodology uncovers gender distribution and its impact on inclusivity and equity across diverse contexts.",
"fx": "eval_gender_distribution"
},
"Gender Profession Bias (Lexical Evaluation)": {
"description": "Gender-profession bias occurs when certain gender identities are overrepresented or underrepresented in the training data, which can result in biased model outputs and reinforce stereotypes. Recognizing and addressing this bias is crucial for promoting fairness and equity in AI applications. Understanding the gender-profession distribution within these datasets is pivotal for creating more inclusive and accurate models, as these models have wide-ranging applications, from chatbots and automated content generation to language translation, and their outputs can have a profound impact on society. Addressing gender-profession bias is an essential step in fostering diversity, inclusivity, and fairness in AI technologies.\n\nThis methodology is designed to identify gender and profession-related information within text-based datasets. It specifically focuses on detecting instances where male and female pronouns are associated with professions in the text. This is achieved through the meticulous use of tailored lexicons and robust regular expressions, which are applied systematically to examine each sentence within the dataset while preserving the contextual information of these linguistic elements.\n\nBy implementing this method, we aim to promote the ethical and socially responsible use of LM-powered applications. It provides valuable insights into gender-profession associations present in unmodified textual data, contributing to a more equitable and informed use of language models.\n\nIn the ever-evolving landscape of technology and language models, this research offers a practical solution to unveil gender and profession dynamics within text data. Its application can bolster the inclusivity and ethical considerations of LM-powered applications, ensuring not only technical proficiency but also a deeper comprehension of the language and its societal implications within textual datasets.",
"short_description": "This methodology uncovers gender-profession bias in training data to ensure fairness and inclusivity in AI applications by systematically identifying gender-profession associations within text-based datasets.",
"fx": "eval_gender_profession"
},
"GenBiT (Microsoft Gender Bias Tool)": {
"description": "(Note: The sampling size is limited to 100 for this methodology due to computational constraints.)\n\n[GenBiT](https://www.microsoft.com/en-us/research/uploads/prod/2021/10/MSJAR_Genbit_Final_Version-616fd3a073758.pdf) is a versatile tool designed to address gender bias in language datasets by utilizing word co-occurrence statistical methods to measure bias. It introduces a novel approach to mitigating gender bias by combining contextual data augmentation, random sampling, sentence classification, and targeted gendered data filtering.\n- The primary goal is to reduce historical gender biases within conversational parallel multilingual datasets, ultimately enhancing the fairness and inclusiveness of machine learning model training and its subsequent applications.\n- What sets GenBiT apart is its adaptability to various forms of bias, not limited to gender alone. It can effectively address biases related to race, religion, or other dimensions, making it a valuable generic tool for bias mitigation in language datasets.\n- GenBiT's impact extends beyond bias reduction metrics; it has shown positive results in improving the performance of machine learning classifiers like Support Vector Machine(SVM). Augmented datasets produced by GenBiT yield significant enhancements in f1-score when compared to the original datasets, underlining its practical benefits in machine learning applications.",
"short_description": "This methodology highlights GenBiT's function in mitigating bias in language datasets by offering adaptability to various forms of bias, such as gender, race, religion, and other dimensions.",
"fx": "eval_genbit"
}
}
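
For reference, a minimal sketch of the categorization that the `eval_gender_distribution` entry describes. The term lexicons (`MALE_TERMS`, `FEMALE_TERMS`) are illustrative stand-ins, not the tool's actual lists; only the six category names and the 75% threshold come from the description above.

```python
import re

# Illustrative lexicons; the real tool's term lists are an assumption here.
MALE_TERMS = {"he", "him", "his", "man", "men", "male", "boy"}
FEMALE_TERMS = {"she", "her", "hers", "woman", "women", "female", "girl"}

STRONG_THRESHOLD = 0.75  # "strongly positive" cutoff from the description


def eval_gender_distribution(text: str) -> str:
    """Classify text into one of the six gender-distribution categories."""
    tokens = re.findall(r"[a-z']+", text.lower())
    male = sum(t in MALE_TERMS for t in tokens)
    female = sum(t in FEMALE_TERMS for t in tokens)
    total = male + female

    if total == 0:
        return "No Gender"
    if male == female:
        return "Equal Gender"
    if female > male:
        return ("Female Strongly Positive Gender"
                if female / total > STRONG_THRESHOLD
                else "Female Positive Gender")
    return ("Male Strongly Positive Gender"
            if male / total > STRONG_THRESHOLD
            else "Male Positive Gender")
```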
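Similarly, a hedged sketch of the sentence-level lexicon-and-regex matching described for `eval_gender_profession`. The pronoun and profession lexicons, the sentence splitter, and the returned record structure are all assumptions for illustration; the point is the per-sentence co-occurrence check that preserves context.

```python
import re

# Illustrative lexicons; the actual profession list is an assumption.
MALE_PRONOUNS = r"\b(?:he|him|his)\b"
FEMALE_PRONOUNS = r"\b(?:she|her|hers)\b"
PROFESSIONS = r"\b(?:doctor|nurse|engineer|teacher|lawyer|scientist)\b"


def eval_gender_profession(text: str) -> list[dict]:
    """Find sentences where gendered pronouns co-occur with professions."""
    matches = []
    # Naive sentence splitting; keeps each sentence as context.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        professions = re.findall(PROFESSIONS, lowered)
        if not professions:
            continue
        if re.search(MALE_PRONOUNS, lowered):
            matches.append({"gender": "male",
                            "professions": professions,
                            "context": sentence})
        if re.search(FEMALE_PRONOUNS, lowered):
            matches.append({"gender": "female",
                            "professions": professions,
                            "context": sentence})
    return matches
```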
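The GenBiT entry maps to `eval_genbit`. The snippet below follows the usage pattern published with Microsoft's `genbit` package (`pip install genbit`), with the 100-item cap from the note in the config; `sentences` is stand-in data, the constructor arguments mirror the package's documented defaults, and the metrics key name should be treated as an assumption if your version differs.

```python
from genbit.genbit_metrics import GenBitMetrics

# Configuration mirroring the genbit package's documented example.
genbit_metrics = GenBitMetrics(language_code="en", context_window=5,
                               distance_weight=0.95, percentile_cutoff=80)

# `sentences` is a hypothetical list of dataset strings; the sample is
# capped at 100 per the computational-constraints note in the config.
sentences = ["She is a doctor.", "He is a nurse."]  # stand-in data
genbit_metrics.add_data(sentences[:100], tokenized=False)

metrics = genbit_metrics.get_metrics(output_statistics=True,
                                     output_word_list=True)
print(metrics["genbit_score"])  # overall gender-bias score
```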