diff --git "a/20240921/2404.04838v2.json" "b/20240921/2404.04838v2.json" new file mode 100644--- /dev/null +++ "b/20240921/2404.04838v2.json" @@ -0,0 +1,669 @@ +{ + "title": "Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead", + "abstract": "We introduce new large labeled datasets on bias in 3 languages and show in experiments that bias exists in all 10 datasets of 5 languages evaluated, including benchmark datasets on the English GLUE/SuperGLUE leaderboards.\nThe 3 new languages give a total of almost 6 million labeled samples and we benchmark on these datasets using SotA multilingual pretrained models: mT5 and mBERT.\nThe challenge of social bias, based on prejudice, is ubiquitous, as recent events with AI and large language models (LLMs)have shown.\nMotivated by this challenge, we set out to estimate bias in multiple datasets.\nWe compare some recent bias metrics and use bipol, which has explainability in the metric.\nWe also confirm the unverified assumption that bias exists in toxic comments by randomly sampling 200 samples from a toxic dataset population using the confidence level of 95% and error margin of 7%.\nThirty gold samples were randomly distributed in the 200 samples to secure the quality of the annotation.\nOur findings confirm that many of the datasets have male bias (prejudice against women), besides other types of bias.\nWe publicly release our new datasets, lexica, models, and codes.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "The problem of social bias in data is a pressing one.\nRecent news about social bias of artificial intelligence (AI)systems, such as Alexa111bbc.com/news/technology-66508514 and ChatGPT,222bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results shows that the age-old problem persists with data, which is used to train machine learning (ML)models.\nSocial bias is the inclination or prejudice for, or against, a person, group or idea, especially in a way that is considered to be unfair, which may be based on race, religion or other factors (Bellamy et al., 2018 ###reference_b6###; Antoniak and Mimno, 2021 ###reference_b4###; Mehrabi et al., 2021 ###reference_b37###; Alkhaled et al., 2023 ###reference_b3###).\nIt can also involve stereotypes that generalize behavior to groups (Brownstein, 2019 ###reference_b10###).\nIt can unfairly skew the output of ML models (Klare et al., 2012 ###reference_b28###; Raji et al., 2020 ###reference_b42###).\nLanguages with fewer resources than English are also affected (Rescigno et al., 2020 ###reference_b45###; Ch\u00e1vez Mulsa and Spanakis, 2020 ###reference_b13###; Kurpicz-Briki, 2020 ###reference_b32###).\nFor example, in Italian, the female gender is under-represented due to the phenomena such as the \u201cinclusive masculine\" (when the masculine is over-extended to denote groups of both male and female referents)\n(Luccioli et al., ###reference_b34###; Vanmassenhove and Monti, 2021 ###reference_b53###).\nIn this work, we are motivated to address the research question of how much bias exists in the text data of multiple languages, if at all bias exists in them?\nWe particularly investigate 6 benchmark datasets on the English GLUE/SuperGLUE leaderboards (Wang et al., 2018 ###reference_b55###, 2019 ###reference_b54###) and one dataset each for the other 4 languages: Italian, Dutch, German, and Swedish.\nFirst, we train SotA multilingual 
Text-to-Text Transfer Transformer (mT5) (Xue et al., 2021 ###reference_b60###) and multilingual Bidirectional Encoder Representations\nfrom Transformers (mBERT) models for bias classification on the multi-axes bias dataset (MAB) for each language, in a setup similar to that of Alkhaled et al. (2023 ###reference_b3###).\nFor the evaluations, we search through the literature to compare different metrics or evaluation methods, as shown in\nTable 1 ###reference_### and discussed in Section 2 ###reference_###.\nThis motivates our choice of bipol, the multi-axes bias metric, which we then compare in experiments with a lexica baseline method.\nIn addition, to confirm the unverified assumption that toxic comments contain bias (Sap et al., 2020 ###reference_b49###; Alkhaled et al., 2023 ###reference_b3###), we annotate 200 randomly-selected samples from the training set of the English MAB." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Literature Review", + "text": "Although English usually gets more support and attention in the literature, there have been attempts at measuring and mitigating bias in other languages.\nTesting for the presence of bias in Italian often takes a contrastive perspective with English, with a focus on gender bias (Gaido et al., 2021 ###reference_b22###; Rescigno et al., 2020 ###reference_b45###).\nMuST-SHE (Bentivogli et al., 2020 ###reference_b7###) and gENder-IT (Vanmassenhove and Monti, 2021 ###reference_b53###)\nare examples of gender bias evaluation sets.\nGoing beyond gender bias, Kurpicz-Briki and Leoni (2021 ###reference_b33###) and Huang et al. (2020 ###reference_b26###) also identified biases related to people\u2019s origin and speakers\u2019 age.\nIt is essential to remember that the mentioned biases can be vehicles for misogynous and hateful discourse (El Abassi and Nisioi, 2020 ###reference_b19###; Attanasio et al., 2022 ###reference_b5###; Merenda et al., 2018 ###reference_b38###).\nBias studies for Dutch mostly consider binary gender bias.\nCh\u00e1vez Mulsa and Spanakis (2020 ###reference_b13###) investigate gender bias in Dutch static and contextualized word embeddings by creating Dutch versions of the Word/Sentence Embedding Association Test (WEAT/SEAT) (Caliskan et al., 2017 ###reference_b11###; May et al., 2019 ###reference_b35###).\nWEAT measures bias in word embeddings and can be limited in scope, in addition to being sensitive to seed words.\nMcCurdy and Serbetci (2020 ###reference_b36###) perform a similar evaluation in a multilingual setup to compare the effect of grammatical gender saliency across languages. Several works use different NLP techniques to evaluate bias in corpora of Dutch news articles (Wevers, 2019 ###reference_b57###; Kroon et al., 2020 ###reference_b30###; Kroon and van der Meer, 2021 ###reference_b31###; Fokkens et al., 2018 ###reference_b21###) and literary texts (Koolen and van Cranenburgh, 2017 ###reference_b29###).\nIn Kurpicz-Briki (2020 ###reference_b32###), bias is measured with regard to place of origin and gender in German word embeddings using WEAT (Caliskan et al., 2017 ###reference_b11###).\nIn Kurpicz-Briki and Leoni (2021 ###reference_b33###), an automatic bias detection method (BiasWords) is presented, through which new biased word sets can be identified by exploring the vector space around the well-known word sets that show bias.\nIn the template-based study of Cho et al. 
(2021 ###reference_b14###), on gender bias in translations, the accuracy of gender inference was measured for multiple languages including German.\nIt was found that, particularly for German, the inference accuracy and disparate impact were lower for female than male, implying that certain translations were wrongly performed for cases that required female inference.\nSince German is a grammatically gendered, morphologically rich language, Gonen and Goldberg (2019 ###reference_b23###) found that the debiasing methods of Bolukbasi et al. (2016 ###reference_b9###) were ineffective on German word embeddings.\nFor Swedish, the main focus of bias research appears to be on gender.\nSahlgren and Olsson (2019 ###reference_b47###) show with their experiments that gender bias is present in pretrained Swedish language models. Katsarou et al. (2022 ###reference_b27###) and Precenth (2019 ###reference_b41###)\nfound that the male gender tends to be associated with higher-status professions. A study with data from mainstream news corpora by Devinney et al. (2020 ###reference_b17###) shows that women are associated with concepts like family, communication and relationships." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Bipol", + "text": "For the purpose of this work, we summarize bipol here; details are discussed in Alkhaled et al. (2023 ###reference_b3###).\nThe bipol metric uses a two-step mechanism for estimating bias in text data: binary classification and sensitive term evaluation using lexica.\nIt has maximum and minimum values of 1 and 0, respectively.\nBipol is expressed in Equations 1b ###reference_2### and 1c ###reference_3### from the main Equation 1a ###reference_1###, where Equation 1b gives the classification component and Equation 1c gives the sensitive term evaluation component.\nIn step 1, a trained model is used to classify all the samples.\nThe ratio of the biased samples to the total samples predicted is determined.\nThe tp, fp, tn, and fn are values of the true positives, false positives, true negatives, and false negatives, respectively.\nSince there\u2019s hardly a perfect classifier, the positive error rate is usually reported.\nFalse positives are known to exist in similar classification systems like spam detection and automatic hate speech detection (Heron, 2009 ###reference_b25###; Feng et al., 2018 ###reference_b20###).\nStep 2 is similar to term frequency-inverse document frequency (TF-IDF) in that it is based on term frequency (Salton and Buckley, 1988 ###reference_b48###; Ramos et al., 2003 ###reference_b44###).\nBiased samples from step 1 are evaluated token-wise along all possible bias axes, using all the lexica of sensitive terms.\nAn axis is a domain such as gender or race.\nTables 2 ###reference_### and 3 ###reference_### provide the lexica sizes.\nFor English and Swedish, we use the same lexica released by Alkhaled et al. (2023 ###reference_b3###) and Adewumi et al. (2023b ###reference_b2###), respectively.\nFor the other 3 languages, we create new lexica of terms (e.g. 
she & her) associated with specific gender or stereotypes from public sources (fluentu.com/blog/italian/italian-nouns, en.wiktionary.org/wiki/Category:Italian_offensive_terms, Dutch_profanity, Category:German_ethnic_slurs).\nSome of the terms in the lexica were selected from the sources based on the topmost entries available.\nThese may also be expanded as needed, since bias terms are known to evolve (Haemmerlie and Montgomery, 1991 ###reference_b24###; Antoniak and Mimno, 2021 ###reference_b4###).\nThe non-English lexica are small because fewer terms are usually available in other languages compared to the high-resource English language, and we use the same size across the languages to be able to compare performance somewhat.\nThe Appendix lists these terms.\nEquation 1c ###reference_3### first finds the absolute difference between the two maximum summed frequencies among the types of an axis, where n and m are the total terms in a sentence along an axis. For example, in the sentence \u2018Women!!! PERSON taught you better than that. Shame on you!\u2019, female terms = 1 while male terms = 0.\nThis is then divided by the summed frequencies of all the terms in that axis.\nThe operation is performed for all axes and the average taken.\nIt is performed for all the biased samples and the average taken." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Methodology", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Datasets", + "text": "Culture-specific biases may not be represented in the MAB versions for the translated languages because the original dataset is in English.\nThis is a limitation.\nHowever, bias is also a universal concern, such that there are examples that span across cultures.\nFor instance, the examples in Table 4 ###reference_### are of universal concern because individuals with non-conforming bodies and women should be respected, regardless of culture or nationality.\nHence, the MAB versions are relevant for bias detection, though they were translated." + }, + { + "section_id": "4.1.1", + "parent_section_id": "4.1", + "section_name": "4.1.1 MAB", + "text": "###table_1### The Italian, Dutch and German datasets were machine-translated from MAB (the reference provides details of the annotation of the base data) with the high-quality Helsinki-NLP model (Tiedemann and Thottingal, 2020 ###reference_b52###).\nEach translation took about 48 hours on one GPU.\nExamples from the data are provided in Table 4 ###reference_###.\nTable 5 ###reference_### provides statistics about the datasets.\nFor quality control (QC), we verified the translations by back-translating some random samples using Google NMT.\nPersonal identifiable information (PII) was removed from the MAB dataset using the spaCy library.\nThe 3 datasets are used to train new bias classifiers.\nWe also train on the original English and the Swedish versions." 
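To make the two-step bipol computation from Section 3 concrete, here is a minimal sketch. The lexica layout (axis -> type -> list of terms) and the caller-supplied classifier are illustrative assumptions, not the released implementation; the product of the two components is consistent with the scores reported in Table 7 (e.g. 0.096 x 0.875 ≈ 0.084 for the English CB).

```python
# Minimal sketch of the two-step bipol score (Section 3). The lexica layout
# (axis -> type -> list of terms) and the predict_biased callable are
# illustrative assumptions, not the released implementation.

def sentence_score(tokens, lexica):
    """Step 2 for one biased sample: per axis, |difference of the two largest
    type counts| / total sensitive-term count in that axis, averaged over axes."""
    axis_scores = []
    for types in lexica.values():                 # e.g. {"female": [...], "male": [...]}
        counts = sorted((sum(tokens.count(t) for t in terms) for terms in types.values()),
                        reverse=True)
        total = sum(counts)
        if total == 0:
            continue                              # skip axes with no sensitive terms present
        second = counts[1] if len(counts) > 1 else 0
        axis_scores.append((counts[0] - second) / total)
    return sum(axis_scores) / len(axis_scores) if axis_scores else 0.0

def bipol(samples, predict_biased, lexica):
    """Step 1 (classification component) times step 2 (term component)."""
    biased = [s for s in samples if predict_biased(s)]
    if not biased:
        return 0.0
    b_c = len(biased) / len(samples)              # ratio of predicted-biased samples
    b_s = sum(sentence_score(s.lower().split(), lexica) for s in biased) / len(biased)
    return b_c * b_s
```

With a lexicon such as {"gender": {"female": ["she", "her", "women", ...], "male": ["he", "his", ...]}}, the example sentence from Section 3 ("Women!!! PERSON taught you better than that. Shame on you!") gets a step-2 score of 1.0 on the gender axis, matching the female terms = 1, male terms = 0 illustration.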
+ }, + { + "section_id": "4.1.2", + "parent_section_id": "4.1", + "section_name": "4.1.2 Evaluation datasets", + "text": "Ten datasets are evaluated for bias in this work.\nAll are automatically preprocessed before evaluation, the same way the training data were preprocessed.\nThis includes removal of IP addresses, emojis, URLs, special characters, emails, extra spaces, numbers, empty text rows, and duplicate rows.\nAll texts are then lowercased.\nWe selected datasets that are available on HuggingFace Datasets (Wolf et al., 2020 ###reference_b59###).\nWe evaluated the first 1,000 samples of each training split due to resource constraints.\nThe understanding is that if bias is detected in these samples, then scaling to the entire dataset would likely reveal more bias.\nFor English, we evaluated the sentence column of the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019 ###reference_b56###), the sentence column of Question-Answering Natural Language Inference (QNLI) (Wang et al., 2018 ###reference_b55###), the sentence1 column of the Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005 ###reference_b18###), the premise column of Multi-Genre Natural Language Inference (MNLI) (Williams et al., 2018 ###reference_b58###), the premise column of the CommitmentBank (CB) dataset (De Marneffe et al., 2019 ###reference_b16###), and the passage column of the Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) (Zhang et al., 2018 ###reference_b61###).\nFor Italian, we evaluated the context column of the Stanford Question Answering Dataset (SQuAD) (Croce et al., 2018 ###reference_b15###; Rajpurkar et al., 2016 ###reference_b43###);\nfor Dutch, the sentence1 column of the Semantic Textual Similarity Benchmark (STSB) (Cer et al., 2017 ###reference_b12###); for German, the text column of the German News Articles Dataset 10k (GNAD10) (Schabus et al., 2017 ###reference_b50###);\nfor Swedish, the premise column of the CB." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Annotation for the assumption confirmation", + "text": "To verify the assumption that toxic comments contain bias,\nwe randomly selected 200 samples from the training set of MAB-English for annotation on Slack, an online platform.\nThe selection of 200 samples is based on an error margin of 7% and a confidence level of 95%.\nTo ensure high-quality annotation, we use established techniques for this task: 1) the use of gold (30) samples, 2) multiple (i.e. 3) annotators, and 3) a minimum qualification of undergraduate study for annotators.\nEach annotator was paid 25 U.S. dollars and it took about 2 hours, on average, to complete the annotation.\nWe mixed the 30 gold samples with the 200 to verify the annotation quality of each annotator, as they were required to get at least 16 correct for their annotation to be accepted.\nThe 30 gold samples are samples with unanimous agreement in the original Jigsaw or SBICv2 data, which make up the MAB.\nWe provide inter-annotator agreement (IAA) using the Jaccard similarity coefficient (intersection over union) and the credibility unanimous score (CUS) (Adewumi et al., 2023a ###reference_b1###) (intersection over sample size)." 
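As a sanity check on the figures above, the snippet below reproduces the sample size implied by a 95% confidence level and 7% error margin (assuming the standard Cochran formula with maximum variance, p = 0.5) and sketches the two agreement scores. The exact set definitions used for Jaccard and CUS here are assumptions based only on the parenthetical descriptions "intersection over union" and "intersection over sample size".

```python
import math

# Sample size at 95% confidence (z = 1.96) and 7% margin of error, assuming
# Cochran's formula with p = 0.5 (maximum variance). This is an assumed
# derivation of the 200-sample figure, not taken from the paper's code.
z, p, e = 1.96, 0.5, 0.07
print(math.ceil(z**2 * p * (1 - p) / e**2))   # 196 -> rounded up to the 200 samples annotated

def iaa(labels):
    """labels: three equal-length lists of 'biased'/'unbiased' annotations.
    Jaccard: samples all three marked biased / samples any marked biased
    (assumed reading of 'intersection over union');
    CUS: unanimously labeled samples / all samples."""
    n = len(labels[0])
    biased_sets = [{i for i, lab in enumerate(a) if lab == "biased"} for a in labels]
    union = set.union(*biased_sets)
    jaccard = len(set.intersection(*biased_sets)) / len(union) if union else 0.0
    cus = sum(1 for triple in zip(*labels) if len(set(triple)) == 1) / n
    return jaccard, cus
```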
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Experiments", + "text": "We selected two state-of-the-art (SotA) pre-trained, multilingual models for the experiments to compare their macro F1 performance: mT5-small and mBERT-base.\nThese are from the HuggingFace hub.\nWe further report the mT5 positive error rate of predictions.\nThe mT5-small has 300 million parameters (Xue et al., 2021 ###reference_b60###) while mBERT-base has 110 million parameters.\nWe trained only on the MAB datasets and evaluated using only the mT5 model, the better of the two models, as will be observed in Section 5 ###reference_###.\nFor the CB and ReCoRD datasets, we evaluate all samples since they contain only about 250 and 620 entries, respectively.\nWe used wandb (Biewald, 2020 ###reference_b8###) for hyper-parameter exploration, based on Bayesian optimization.\nFor mT5, we set the maximum and minimum learning rates as 5e-5 and 2e-5, while the maximum and minimum epochs are 20 and 4, respectively.\nOne epoch is equivalent to the ratio of the total number of samples to the batch size (i.e. the steps).\nWe used a batch size of 8 because higher numbers easily resulted in memory challenges.\nFor mBERT, we set the learning rates and epochs as with mT5.\nHowever, we explore batch sizes of 8, 16 and 32.\nFor both models, we set the maximum input sequence length to 512.\nTraining took, on average, about 7.3 hours per language per epoch for mBERT and about 6 hours for mT5.\nFor all the experiments, we limit the run count to 2 per language because of the long training time each run takes on average.\nThe average scores of the results are reported.\nThe saved models with the lowest losses were used to evaluate the datasets.\nAll the experiments were performed on two shared Nvidia DGX-1 machines that run Ubuntu 20.04 and 18.04.\nOne machine has 8 x 40GB A100 GPUs while the other has 8 x 32GB V100 GPUs.\nThe lexica baseline, compared in the experiments, is similar to the equation of the second step in bipol.\nIt does not consider bias semantically and uses term frequencies, similar to TF-IDF.\nIt uses the same lexica as bipol.\nIts maximum and minimum values are 1 and 0, respectively." 
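The Bayesian hyper-parameter exploration described above could be set up with a wandb sweep roughly as follows; the project name, metric name, and the body of train() are placeholders, while the ranges mirror the values stated in this section.

```python
import wandb

# Sketch of a Bayesian sweep mirroring the ranges above
# (lr 2e-5..5e-5, 4..20 epochs, batch sizes 8/16/32 for mBERT, fixed 8 for mT5).
# Project/metric names and the train() body are illustrative placeholders.
sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 2e-5, "max": 5e-5},
        "epochs": {"min": 4, "max": 20},
        "batch_size": {"values": [8, 16, 32]},   # the mT5 runs kept this fixed at 8
        "max_seq_len": {"value": 512},
    },
}

def train():
    run = wandb.init()
    cfg = run.config
    # ... fine-tune mT5-small / mBERT-base on the MAB split with cfg values,
    # logging wandb.log({"val_loss": ..., "macro_f1": ...}) each epoch ...
    run.finish()

sweep_id = wandb.sweep(sweep_config, project="mab-bias-classification")
wandb.agent(sweep_id, function=train, count=2)   # 2 runs per language, as in the paper
```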
+ }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Results and Discussion", + "text": "From Table 6 ###reference_###, we observe that all mT5 results are better than those of mBERT across the languages.\nTwo-sample t-tests of the difference of means between all the corresponding mT5 and mBERT scores give p values below 0.0001 for an alpha of 0.05, showing the results are statistically significant.\nIt appears a better hyper-parameter search may be required for the mBERT model to converge and achieve better performance.\nThe best macro F1 result is for English mT5 at 0.787.\nThis is not surprising, as English has the largest amount of training data for the pre-trained mT5 model (Xue et al., 2021 ###reference_b60###).\nThis occurred at a learning rate of 2.9e-5 and step 1,068,041.\nFigures 1 ###reference_### and 2 ###reference_### depict the validation-set macro F1 and loss line graphs for the 2 runs for the 5 languages, respectively.\nFrom Table 7 ###reference_###, we observe that all the evaluated datasets contain bias, though seemingly little (but important) when compared to the maximum of 1.\nWe say important because many of the datasets contain a small number of samples, yet bias can still be detected.\nFurthermore, a low value does not necessarily diminish the weight of the effect of bias in society or the data, but we leave open for the NLP community the discussion about what amount should be tolerated.\nOur recommendation is to have a bias score as close to zero as possible.\nOn the other hand, the lexica baseline appears overly confident, reporting much more bias; this is incorrect because the method fails to exclude unbiased text in its evaluation, a shortcoming of methods based solely on lexica.\nThe Dutch STSB score is higher than the other bipol scores because of the higher bipol classifier component score of 0.435, which may be due to the nature of the dataset.\n###figure_1### ###figure_2### ###figure_3###" + }, + { + "section_id": "5.1", + "parent_section_id": "5", + "section_name": "Error analysis & qualitative results", + "text": "According to the error matrix in Figure 3 ###reference_###, the mT5 model is better at correctly predicting unbiased samples.\nThis is because of the higher number of unbiased samples in the training data of MAB.\nIn Table 8 ###reference_###, the first example for the English CB contains a stereotypical statement, \"men are naturally right and it is the role of women to follow their lead\", leading to the correct biased prediction by the model.\nSimilarly, this correct prediction is made in the Swedish CB.\nWe notice over-generalization (May et al., 2019 ###reference_b35###; Nadeem et al., 2021 ###reference_b39###) in the correct examples for the CoLA predictions, where \"every\" is used.\nThe table also shows some incorrect predictions.\n###table_2###" + }, + { + "section_id": "5.2", + "parent_section_id": "5", + "section_name": "Consistent prediction with perturbation", + "text": "An interesting property of relative consistency that we observed with the model predictions, as demonstrated with the CoLA dataset, is that when sentences are perturbed, the model mostly maintains its predictions, as long as the grounds for prediction (in this case, over-generalization) remain the same.\nThe perturbations are inherent in the CoLA dataset itself, as the dataset is designed that way.\nSome examples are provided in Table 9 ###reference_### in the Appendix, where 6 out of 8 are correctly predicted.\nThis property is repeated consistently in other examples not shown here." + }, + { + "section_id": "5.3", + "parent_section_id": "5", + "section_name": "Explainability by graphs", + "text": "We show explainability by visualization using graphs.\nBipol produces a dictionary of lists for every evaluation, and we show a bar graph of the top-5 frequent terms for the GNAD10 dataset in Figure 4 ###reference_###, which has overall male bias.\nMany of the 10 evaluated datasets display overall male bias.\n###figure_4###" + }, + { + "section_id": "5.4", + "parent_section_id": "5", + "section_name": "Assumption confirmation through annotation", + "text": "The results of the annotation of the 200 MAB samples reveal that toxic comments do contain bias.\nThis is shown in Figure 5 ###reference_###.\nThe Jaccard similarity coefficient and CUS of IAA are 0.261 (not to be interpreted using Kappa for 2 annotators on 2 classes; ours involved 3 annotators) and 0.515, respectively, given that over 50% is the intersection of unanimous decision.\n###figure_5###" + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "The findings of this work show that bias besets Natural Language Processing (NLP) datasets regardless of language, including benchmark datasets on the GLUE/SuperGLUE leaderboards.\nWe introduced MAB datasets in 3 languages for training models in bias detection.\nEach has about 2 million labeled samples.\nWe also contribute lexica of bias terms for the languages.\nIn addition, we verified the assumption that toxic comments contain bias.\nIt may be impossible to completely remove bias from data or models, since they reflect the real world, but resources for estimating bias can provide insight into mitigation strategies for reducing bias.\nFuture work may explore ways of minimizing false positives in classifiers to make them more effective.\nOne may also explore how this work scales to other languages or how multilingual models compare to language-specific monolingual models or large language models (LLMs).\nRegarding culture-specific biases in datasets, one solution would be to collect data from the specific cultures/nationalities to capture these biases." 
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Appendix", + "text": "Gender-female\n\n\ni) hon (she)\n\nii) hennes (her)\n\niii) flicka (girl)\n\niv) mor (mother)\n\nv) kvinna (woman)\n\nvi) dotter (daughter)\n\nvii) mormor (grandmother)\n\nviii) dam (lady)\n\nix) sondotter (son\u2019s daughter)\n\nx) dotterdotter (daughter\u2019s daughter)\n\nxi) tjej (girl)\n\nxii) tjejer (girls)\n\nxiii) gumma (old woman)\n\nxiv) fru (wife)\n\nxv) slampa (slut)\n\nxvi) slyna (slut)\n\nxvii) lebb (lesbian)\nGender-male \n\n\ni) han (he)\n\nii) pojke (boy)\n\niii) kille (boy)\n\niv) far (father)\n\nv) farfar (grandfather)\n\nvi) hans (his)\n\nvii) man (man)\n\nviii) son (son)\n\nix) make (husband)\n\nx) herre (lord)\n\nxi) herrar (lords)\n\nxii) sonson (son\u2019s son)\n\nxiii) dotterson (daughter\u2019s son)\n\nxiv) gubbe (old man)\n\nxv) farbro (uncle)\n\nxvi) broson (nephew)\n\nxvii) m\u00e4n (men)\nRacial-black \n\n\ni) neger (nigger)\n\nii) bl\u00e5neger (nigger)\n\niii) bl\u00e5land (nigger)\n\niv) blatte (dark immigrants)\n\nv) svartskalle (black head)\n\nvi) turk (non-Swedish)\n\nvii) sosse (political slur)\n\nviii) svarting (blackness)\n\nix) partysvenska (political slur)\n\nx) nigga (nigger)\nRacial-white \n\n\ni) svenne (Swedish stereotype)\n\nii) turk (non-Swedish)\n\niii) jugge (derogatory term)\n\niv) sosse (political slur)\n\nv) hurrare (Finnish Swedish)\n\nvi) lapp (Sami people)\n\nvii) Ang mo (red hair)\n\nviii) partysvenska (political slur)\n\nix) Ann (White woman)\n\nx) rutabaga (Swede)" + } + ], + "tables": { + "1": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Metric/EvaluatorAxisTerms
Winogender (Rudinger et\u00a0al., 2018)\n160
WinoBias (Zhao et\u00a0al., 2018)\n140
StereoSet (Nadeem et\u00a0al., 2021)\n4321
GenBiT (Sengupta et\u00a0al., 2021)\n1-
CrowS-Pairs (Nangia et\u00a0al., 2020)\n93,016
Bipol (Alkhaled et\u00a0al., 2023)\n\n2, 13\n45, 466
\n
\n
Table 1: Comparison of some bias evaluation methods.
\n
", + "capture": "Table 1: Comparison of some bias evaluation methods." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AxisType 1Type 2Type 3
Racial84 (black)127 (white)
Gender76 (female)46 (male)
Religious180 (christian)465 (muslim)179 (hindu)
\n
\n
Table 2: \nEnglish lexica sizes (Alkhaled et\u00a0al., 2023). An example of racial-black is negro.\n
\n
", + "capture": "Table 2: \nEnglish lexica sizes (Alkhaled et\u00a0al., 2023). An example of racial-black is negro.\n" + }, + "3": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
AxisType 1Type 2
Gender17 (female)17 (male)
Racial10 (black)10 (white)
\n
Table 3: \nItalian, Dutch, German and Swedish Lexica sizes. An example of gender-female is woman.\n
\n
", + "capture": "Table 3: \nItalian, Dutch, German and Swedish Lexica sizes. An example of gender-female is woman.\n" + }, + "4": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\ncomment_text\n\n\n\nlabel\n\n\n\nold_id\n\n\n\nid\n\n
\n\nA fitness ranking that drops each month as more and more fat people move to Colorado.\n\n\n\nbiased\n\n\n\n6231681\n\n\n\n167\n\n
\n\nEen fitness ranking die elke maand daalt als meer en meer dikke mensen verhuizen naar Colorado.\n\n
\n\nEin Fitness-Ranking, das jeden Monat f\u00e4llt, wie mehr und mehr fette Menschen nach Colorado zu bewegen.\n\n
\n\nUn ranking di fitness che scende ogni mese come sempre pi\u00f9 persone grasse si spostano in Colorado.\n\n
\n\nEn fitness ranking som sjunker varje m\u00e5nad som fler och fler feta m\u00e4nniskor flyttar till Colorado.\n\n
\n\nWomen ! ! ! PERSON taught you better than that. Shame on you!\n\n\n\nbiased\n\n\n\n6233025\n\n\n\n628\n\n
\n\nVrouwen ! ! ! PERSOON heeft je beter geleerd dan dat. Je moet je schamen!\n\n
\n\nFrauen!!!! PERSON lehrte Sie besser als das. Schande \u00fcber Sie!\n\n
\n\nDonne ! ! ! Person ti ha insegnato meglio di cos\u00ec, vergognati!\n\n
\n\nKvinnor ! ! !- Han l\u00e4rde dig b\u00e4ttre \u00e4n s\u00e5. Sk\u00e4ms p\u00e5 dig!\n\n
\n
Table 4: \nEnglish, Dutch, German, Italian, and Swedish examples from the MAB dataset. \"PERSON\" is the anonymization of a piece of personal identifiable information (PII)in the dataset.\n
\n
", + "capture": "Table 4: \nEnglish, Dutch, German, Italian, and Swedish examples from the MAB dataset. \"PERSON\" is the anonymization of a piece of personal identifiable information (PII)in the dataset.\n" + }, + "5": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SetBiasedUnbiasedTotal
Training533,5441,209,4331,742,977
Validation32,33869,649101,987
Test33,47068,541102,011
599,3521,347,6231,946,975
\n
Table 5: \nMAB dataset split\n
\n
", + "capture": "Table 5: \nMAB dataset split\n" + }, + "6": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nmacro F1 (s.d.)\nmT5 error \n
\nMAB version\nmBERTmT5\nfp/(fp +tp)
English0.418 (0.01)0.787 (0)0.261
Italian0.429 (0)0.768 (0)0.283
Dutch0.419 (0.01)0.768 (0)0.269
German0.418 (0.01)0.769 (0)0.261
Swedish0.418 (0.01)0.768 (0)0.274
\n
\n
Table 6: \nAverage F1 scores on the validation sets.\n
\n
", + "capture": "Table 6: \nAverage F1 scores on the validation sets.\n" + }, + "7": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nbipol scores \u00a0\u00a0 (s.d.)
Englishbipol (b)\nbaseline \n
CB0.0960.8750.084 (0)0.88
CoLA0.1010.9430.095 (0)0.958
ReCoRD0.0940.8520.025 (0)0.829
MRPC0.0480.9440.045 (0)0.957
MNLI0.0630.8330.053 (0)0.965
QNLI0.030.9330.028 (0)0.945
Italian
SQuAD0.01400.014 (0)0.989
Dutch
STSB0.4350.9920.432 (0)0.987
German
GNAD100.0490.5020.025 (0)1
Swedish
CB0.080.9380.075 (0)0.97
\n
\n
Table 7: Average bipol & lexica baseline scores.
\n
", + "capture": "Table 7: Average bipol & lexica baseline scores." + }, + "8": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\ncomment_text\n\n\n\nprediction\n\n\n\ncsv id\n\n
Correct predictions from the English CB
\n\nnicky approached her with the assumption that men are naturally right and it is the role of women to follow their lead. constance whose confidence was growing daily was not prepared to give in to nickys wishes merely because of his sex. if she felt he was right then she agreed with him.\n\n\n\nbiased\n\n\n\n85\n\n
\n\nb: thats true. a: so. b: uh the other argument is that the death penalty is a deterrent and i really dont uh agree with that. i dont think anyone who would commit uh a crime that would get them the death penalty would stop at the moment and say well i was about to kill and dismember this person but oh if they catch me theyre going to kill me so i better not do it. i just dont think uh that it works that way. a: yeah. i dont think its done.\n\n\n\nbiased\n\n\n\n133\n\n
Incorrect prediction
\n\nb: yeah and the ground will filter some of it but not all of it. a: no not when you figure i didnt realize one cow produces that much manure\n\n\n\nbiased\n\n\n\n137\n\n
Correct predictions from the English CoLA
\n\nif you give him enough opportunity every senator will succumb to corruption.\n\n\n\nbiased\n\n\n\n266\n\n
\n\nevery senator becomes more corrupt the more lobbyists he talks to.\n\n\n\nbiased\n\n\n\n277\n\n
Incorrect prediction
\n\nbill squeezed the puppet through the hole.\n\n\n\nbiased\n\n\n\n82\n\n
\n
Table 8: \nQualitative examples of apparently correct and incorrect predictions in some of the English datasets, based on the mT5 model.\n
\n
", + "capture": "Table 8: \nQualitative examples of apparently correct and incorrect predictions in some of the English datasets, based on the mT5 model.\n" + }, + "9": { + "table_html": "
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\n\ncomment_text\n\n\n\nprediction\n\n\n\ncsv id\n\n
\n\nif you give him enough opportunity every senator will succumb to corruption.\n\n\n\nbiased\n\n\n\n266\n\n
\n\nyou give him enough opportunity and every senator will succumb to corruption.\n\n\n\nbiased\n\n\n\n267\n\n
\n\nwe gave him enough opportunity and sure enough every senator succumbed to corruption.\n\n\n\nunbiased\n\n\n\n268\n\n
\n\nif you give any senator enough opportunity he will succumb to corruption.\n\n\n\nbiased\n\n\n\n269\n\n
\n\nyou give any senator enough opportunity and he will succumb to corruption.\n\n\n\nbiased\n\n\n\n270\n\n
\n\nyou give every senator enough opportunity and he will succumb to corruption.\n\n\n\nbiased\n\n\n\n271\n\n
\n\nwe gave any senator enough opportunity and sure enough he succumbed to corruption.\n\n\n\nbiased\n\n\n\n272\n\n
\n\nwe gave every senator enough opportunity and sure enough he succumbed to corruption.\n\n\n\nunbiased\n\n\n\n273\n\n
\n
Table 9: \nMostly consistent correct prediction with perturbation in the CoLA dataset.\n
\n
", + "capture": "Table 9: \nMostly consistent correct prediction with perturbation in the CoLA dataset.\n" + } + }, + "image_paths": { + "1": { + "figure_path": "2404.04838v2_figure_1.png", + "caption": "Figure 1: Macro F1 of the validation set for the 5 languages, as generated by wandb.", + "url": "http://arxiv.org/html/2404.04838v2/extracted/5870181/mt5_f1.png" + }, + "2": { + "figure_path": "2404.04838v2_figure_2.png", + "caption": "Figure 2: Loss on the validation set for the 5 languages, as generated by wandb.", + "url": "http://arxiv.org/html/2404.04838v2/extracted/5870181/mt5_loss.png" + }, + "3": { + "figure_path": "2404.04838v2_figure_3.png", + "caption": "Figure 3: Error matrix of mT5 on MAB-English", + "url": "http://arxiv.org/html/2404.04838v2/extracted/5870181/ematrix.png" + }, + "4": { + "figure_path": "2404.04838v2_figure_4.png", + "caption": "Figure 4: Top 5 frequent terms in the GNAD10 dataset (paired terms are only for comparison).", + "url": "http://arxiv.org/html/2404.04838v2/extracted/5870181/gm_gnad.png" + }, + "5": { + "figure_path": "2404.04838v2_figure_5.png", + "caption": "Figure 5: Annotation confirms assumption about toxic comments.", + "url": "http://arxiv.org/html/2404.04838v2/extracted/5870181/annotate_res.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Afriwoz: Corpus for exploiting cross-lingual transfer for dialogue generation in low-resource, african languages.", + "author": "Tosin Adewumi, Mofetoluwa Adeyemi, Aremu Anuoluwapo, Bukola Peters, Happy Buzaaba, Oyerinde Samuel, Amina Mardiyyah Rufai, Benjamin Ajibade, Tajudeen Gwadabe, Mory Moussou Koulibaly Traore, Tunde Oluwaseyi Ajayi, Shamsuddeen Muhammad, Ahmed Baruwa, Paul Owoicho, Tolulope Ogunremi, Phylis Ngigi, Orevaoghene Ahia, Ruqayya Nasir, Foteini Liwicki, and Marcus Liwicki. 2023a.", + "venue": "In 2023 International Joint Conference on Neural Networks (IJCNN), pages 1\u20138.", + "url": "https://doi.org/10.1109/IJCNN54540.2023.10191208" + } + }, + { + "2": { + "title": "Bipol: Multi-axes evaluation of bias with explainability in benchmark datasets.", + "author": "Tosin Adewumi, Isabella S\u00f6dergren, Lama Alkhaled, Sana Sabah Sabry, Foteini Liwicki, and Marcus Liwicki. 2023b.", + "venue": "In Proceedings of Recent Advances in Natural Language Processing (RANLP), Varna, Bulgaria.", + "url": null + } + }, + { + "3": { + "title": "Bipol: A novel multi-axes bias evaluation metric with explainability for nlp.", + "author": "Lama Alkhaled, Tosin Adewumi, and Sana Sabah Sabry. 2023.", + "venue": "Natural Language Processing Journal, 4:100030.", + "url": "https://doi.org/https://doi.org/10.1016/j.nlp.2023.100030" + } + }, + { + "4": { + "title": "Bad seeds: Evaluating lexical methods for bias measurement.", + "author": "Maria Antoniak and David Mimno. 2021.", + "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1889\u20131904.", + "url": null + } + }, + { + "5": { + "title": "Entropy-based attention regularization frees unintended bias mitigation from lists.", + "author": "Giuseppe Attanasio, Debora Nozza, Dirk Hovy, and Elena Baralis. 2022.", + "venue": "arXiv preprint arXiv:2203.09192.", + "url": null + } + }, + { + "6": { + "title": "Ai fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias.", + "author": "Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. 
Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, and Yunfeng Zhang. 2018.", + "venue": null, + "url": "http://arxiv.org/abs/1810.01943" + } + }, + { + "7": { + "title": "Gender in danger? evaluating speech translation technology on the MuST-SHE corpus.", + "author": "Luisa Bentivogli, Beatrice Savoldi, Matteo Negri, Mattia A. Di Gangi, Roldano Cattoni, and Marco Turchi. 2020.", + "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6923\u20136933, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2020.acl-main.619" + } + }, + { + "8": { + "title": "Experiment tracking with weights and biases.", + "author": "Lukas Biewald. 2020.", + "venue": "Software available from wandb.com.", + "url": "https://www.wandb.com/" + } + }, + { + "9": { + "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings.", + "author": "Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016.", + "venue": "Advances in neural information processing systems, 29.", + "url": null + } + }, + { + "10": { + "title": "Implicit Bias.", + "author": "Michael Brownstein. 2019.", + "venue": "In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, Fall 2019 edition. Metaphysics Research Lab, Stanford University.", + "url": null + } + }, + { + "11": { + "title": "Semantics derived automatically from language corpora contain human-like biases.", + "author": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017.", + "venue": "Science, 356(6334):183\u2013186.", + "url": null + } + }, + { + "12": { + "title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation.", + "author": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez-Gazpio, and Lucia Specia. 2017.", + "venue": "In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1\u201314, Vancouver, Canada. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/S17-2001" + } + }, + { + "13": { + "title": "Evaluating bias in Dutch word embeddings.", + "author": "Rodrigo Alejandro Ch\u00e1vez Mulsa and Gerasimos Spanakis. 2020.", + "venue": "In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 56\u201371, Barcelona, Spain (Online). Association for Computational Linguistics.", + "url": "https://aclanthology.org/2020.gebnlp-1.6" + } + }, + { + "14": { + "title": "Towards cross-lingual generalization of translation gender bias.", + "author": "Won Ik Cho, Jiwon Kim, Jaeyeong Yang, and Nam Soo Kim. 2021.", + "venue": "In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 449\u2013457.", + "url": null + } + }, + { + "15": { + "title": "Neural learning for question answering in italian.", + "author": "Danilo Croce, Alexandra Zelenanska, and Roberto Basili. 2018.", + "venue": "In AI*IA 2018 \u2013 Advances in Artificial Intelligence, pages 389\u2013402, Cham. Springer International Publishing.", + "url": null + } + }, + { + "16": { + "title": "The commitmentbank: Investigating projection in naturally occurring discourse.", + "author": "Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. 
2019.", + "venue": "In proceedings of Sinn und Bedeutung, volume 23, pages 107\u2013124.", + "url": null + } + }, + { + "17": { + "title": "Semi-supervised topic modeling for gender bias discovery in english and swedish.", + "author": "Hannah Devinney, 1974 Bj\u00f6rklund, Jenny, and Henrik Bj\u00f6rklund. 2020.", + "venue": "EQUITBL Proceedings of the Second Workshop on Gender Bias in Natural Language Processing, pages 79 \u2013 92.", + "url": "https://proxy.lib.ltu.se/login?url=https://search.ebscohost.com/login.aspx?direct=true&db=edsswe&AN=edsswe.oai.DiVA.org.uu.430350&lang=sv&site=eds-live&scope=site" + } + }, + { + "18": { + "title": "Automatically constructing a corpus of sentential paraphrases.", + "author": "William B. Dolan and Chris Brockett. 2005.", + "venue": "In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).", + "url": "https://aclanthology.org/I05-5002" + } + }, + { + "19": { + "title": "Mdd@ ami: Vanilla classifiers for misogyny identification.", + "author": "Samer El Abassi and Sergiu Nisioi. 2020.", + "venue": "EVALITA Evaluation of NLP and Speech Tools for Italian-December 17th, 2020, page 55.", + "url": null + } + }, + { + "20": { + "title": "Multistage and elastic spam detection in mobile social networks through deep learning.", + "author": "Bo Feng, Qiang Fu, Mianxiong Dong, Dong Guo, and Qiang Li. 2018.", + "venue": "IEEE Network, 32(4):15\u201321.", + "url": null + } + }, + { + "21": { + "title": "Studying muslim stereotyping through microportrait extraction.", + "author": "Antske Fokkens, Nel Ruigrok, Camiel Beukeboom, Gagestein Sarah, and Wouter van Atteveldt. 2018.", + "venue": "In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", + "url": "https://aclanthology.org/L18-1590" + } + }, + { + "22": { + "title": "How to split: the effect of word segmentation on gender bias in speech translation.", + "author": "Marco Gaido, Beatrice Savoldi, Luisa Bentivogli, Matteo Negri, and Marco Turchi. 2021.", + "venue": "In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3576\u20133589, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2021.findings-acl.313" + } + }, + { + "23": { + "title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them.", + "author": "Hila Gonen and Yoav Goldberg. 2019.", + "venue": "arXiv preprint arXiv:1903.03862.", + "url": null + } + }, + { + "24": { + "title": "Goldberg revisited: Pro-female evaluation bias and changed attitudes toward women by engineering students.", + "author": "Frances M Haemmerlie and Robert L Montgomery. 1991.", + "venue": "Journal of Social Behavior and Personality, 6(2):179.", + "url": null + } + }, + { + "25": { + "title": "Technologies for spam detection.", + "author": "Simon Heron. 2009.", + "venue": "Network Security, 2009(1):11\u201315.", + "url": null + } + }, + { + "26": { + "title": "Multilingual twitter corpus and baselines for evaluating demographic bias in hate speech recognition.", + "author": "Xiaolei Huang, Linzi Xing, Franck Dernoncourt, and Michael J Paul. 2020.", + "venue": "arXiv preprint arXiv:2002.10361.", + "url": null + } + }, + { + "27": { + "title": "Measuring gender bias in contextualized embeddings.", + "author": "Styliani Katsarou, Borja Rodr\u00edguez-G\u00e1lvez, and Jesse Shanahan. 
2022.", + "venue": "Computer Sciences and Mathematics Forum, 3(1).", + "url": "https://doi.org/10.3390/cmsf2022003003" + } + }, + { + "28": { + "title": "Face recognition performance: Role of demographic information.", + "author": "Brendan F Klare, Mark J Burge, Joshua C Klontz, Richard W Vorder Bruegge, and Anil K Jain. 2012.", + "venue": "IEEE Transactions on Information Forensics and Security, 7(6):1789\u20131801.", + "url": null + } + }, + { + "29": { + "title": "These are not the stereotypes you are looking for: Bias and fairness in authorial gender attribution.", + "author": "Corina Koolen and Andreas van Cranenburgh. 2017.", + "venue": "In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 12\u201322, Valencia, Spain. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/W17-1602" + } + }, + { + "30": { + "title": "Clouded reality: News representations of culturally close and distant ethnic outgroups.", + "author": "Anne C Kroon, Damian Trilling, Toni GLA van der Meer, and Jeroen GF Jonkman. 2020.", + "venue": "Communications, 45(s1):744\u2013764.", + "url": null + } + }, + { + "31": { + "title": "Who\u2019s to fear? implicit sexual threat pre and post the \u201crefugee crisis\u201d.", + "author": "Anne C Kroon and Toni GLA van der Meer. 2021.", + "venue": "Journalism Practice, pages 1\u201317.", + "url": null + } + }, + { + "32": { + "title": "Cultural differences in bias? origin and gender bias in pre-trained german and french word embeddings.", + "author": "Mascha Kurpicz-Briki. 2020.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "A world full of stereotypes? further investigation on origin and gender bias in multi-lingual word embeddings.", + "author": "Mascha Kurpicz-Briki and Tomaso Leoni. 2021.", + "venue": "Frontiers in big Data, 4:20.", + "url": null + } + }, + { + "34": { + "title": "Stereotipi di genere e traduzione automatica dall\u2019inglese all\u2019italiano: uno studio di caso sul femminile nelle professioni.", + "author": "Alessandra Luccioli, Silvia Bernardini, and Raffaella Baccolini.", + "venue": null, + "url": null + } + }, + { + "35": { + "title": "On measuring social biases in sentence encoders.", + "author": "Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019.", + "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622\u2013628, Minneapolis, Minnesota. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/N19-1063" + } + }, + { + "36": { + "title": "Grammatical gender associations outweigh topical gender bias in crosslinguistic word embeddings.", + "author": "Katherine McCurdy and Oguz Serbetci. 2020.", + "venue": "arXiv preprint arXiv:2005.08864.", + "url": null + } + }, + { + "37": { + "title": "A survey on bias and fairness in machine learning.", + "author": "Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021.", + "venue": "ACM Computing Surveys (CSUR), 54(6):1\u201335.", + "url": null + } + }, + { + "38": { + "title": "Source-driven representations for hate speech detection.", + "author": "Flavio Merenda, Claudia Zaghi, Tommaso Caselli, and Malvina Nissim. 
2018.", + "venue": "Computational Linguistics CLiC-it 2018, page 258.", + "url": null + } + }, + { + "39": { + "title": "StereoSet: Measuring stereotypical bias in pretrained language models.", + "author": "Moin Nadeem, Anna Bethke, and Siva Reddy. 2021.", + "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356\u20135371, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2021.acl-long.416" + } + }, + { + "40": { + "title": "CrowS-pairs: A challenge dataset for measuring social biases in masked language models.", + "author": "Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020.", + "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953\u20131967, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2020.emnlp-main.154" + } + }, + { + "41": { + "title": "Word embeddings and gender stereotypes in swedish and english.", + "author": "Rasmus Precenth. 2019.", + "venue": null, + "url": null + } + }, + { + "42": { + "title": "Saving face: Investigating the ethical concerns of facial recognition auditing.", + "author": "Inioluwa Deborah Raji, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. 2020.", + "venue": "In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, AIES \u201920, page 145\u2013151, New York, NY, USA. Association for Computing Machinery.", + "url": "https://doi.org/10.1145/3375627.3375820" + } + }, + { + "43": { + "title": "SQuAD: 100,000+ questions for machine comprehension of text.", + "author": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016.", + "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383\u20132392, Austin, Texas. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/D16-1264" + } + }, + { + "44": { + "title": "Using tf-idf to determine word relevance in document queries.", + "author": "Juan Ramos et al. 2003.", + "venue": "In Proceedings of the first instructional conference on machine learning, volume 242, pages 29\u201348. Citeseer.", + "url": null + } + }, + { + "45": { + "title": "A case study of natural gender phenomena in translation. a comparison of google translate, bing microsoft translator and deepl for english to italian, french and spanish.", + "author": "Argentina Anna Rescigno, Eva Vanmassenhove, Johanna Monti, and Andy Way. 2020.", + "venue": "In CLiC-it.", + "url": null + } + }, + { + "46": { + "title": "Gender bias in coreference resolution.", + "author": "Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018.", + "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8\u201314, New Orleans, Louisiana. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/N18-2002" + } + }, + { + "47": { + "title": "Gender bias in pretrained swedish embeddings.", + "author": "Magnus Sahlgren and Fredrik Olsson. 2019.", + "venue": "In Proceedings of the 22nd Nordic Conference on Computational Linguistics, NoDaLiDa 2019, Turku, Finland, September 30 - October 2, 2019, pages 35\u201343. 
Link\u00f6ping University Electronic Press.", + "url": "https://aclweb.org/anthology/W19-6104/" + } + }, + { + "48": { + "title": "Term-weighting approaches in automatic text retrieval.", + "author": "Gerard Salton and Christopher Buckley. 1988.", + "venue": "Information processing & management, 24(5):513\u2013523.", + "url": null + } + }, + { + "49": { + "title": "Social bias frames: Reasoning about social and power implications of language.", + "author": "Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020.", + "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477\u20135490, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2020.acl-main.486" + } + }, + { + "50": { + "title": "One million posts: A data set of german online discussions.", + "author": "Dietmar Schabus, Marcin Skowron, and Martin Trapp. 2017.", + "venue": "In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 1241\u20131244, Tokyo, Japan.", + "url": "https://doi.org/10.1145/3077136.3080711" + } + }, + { + "51": { + "title": "Genbit: measure and mitigate gender bias in language datasets.", + "author": "Kinshuk Sengupta, Rana Maher, Declan Groves, and Chantal Olieman. 2021.", + "venue": "Microsoft Journal of Applied Research, 16:63\u201371.", + "url": "https://www.microsoft.com/en-us/research/publication/genbit-measure-and-mitigate-gender-bias-in-language-datasets/" + } + }, + { + "52": { + "title": "OPUS-MT \u2014 Building open translation services for the World.", + "author": "J\u00f6rg Tiedemann and Santhosh Thottingal. 2020.", + "venue": "In Proceedings of the 22nd Annual Conferenec of the European Association for Machine Translation (EAMT), Lisbon, Portugal.", + "url": null + } + }, + { + "53": { + "title": "gender-it: An annotated english-italian parallel challenge set for cross-linguistic natural gender phenomena.", + "author": "Eva Vanmassenhove and Johanna Monti. 2021.", + "venue": "arXiv preprint arXiv:2108.02854.", + "url": null + } + }, + { + "54": { + "title": "Superglue: A stickier benchmark for general-purpose language understanding systems.", + "author": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019.", + "venue": "In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.", + "url": "https://proceedings.neurips.cc/paper/2019/file4496bf24afe7fab6f046bf4923da8de6-Paper.pdf" + } + }, + { + "55": { + "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding.", + "author": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018.", + "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353\u2013355, Brussels, Belgium. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/W18-5446" + } + }, + { + "56": { + "title": "Neural network acceptability judgments.", + "author": "Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019.", + "venue": "Transactions of the Association for Computational Linguistics, 7:625\u2013641.", + "url": null + } + }, + { + "57": { + "title": "Using word embeddings to examine gender bias in dutch newspapers, 1950-1990.", + "author": "Melvin Wevers. 
2019.", + "venue": "arXiv preprint arXiv:1907.08922.", + "url": null + } + }, + { + "58": { + "title": "A broad-coverage challenge corpus for sentence understanding through inference.", + "author": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018.", + "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112\u20131122, New Orleans, Louisiana. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/N18-1101" + } + }, + { + "59": { + "title": "Transformers: State-of-the-art natural language processing.", + "author": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020.", + "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38\u201345, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2020.emnlp-demos.6" + } + }, + { + "60": { + "title": "mT5: A massively multilingual pre-trained text-to-text transformer.", + "author": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021.", + "venue": "In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483\u2013498, Online. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/2021.naacl-main.41" + } + }, + { + "61": { + "title": "Record: Bridging the gap between human and machine commonsense reading comprehension.", + "author": "Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018.", + "venue": "arXiv preprint arXiv:1810.12885.", + "url": null + } + }, + { + "62": { + "title": "Gender bias in coreference resolution: Evaluation and debiasing methods.", + "author": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018.", + "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15\u201320, New Orleans, Louisiana. Association for Computational Linguistics.", + "url": "https://doi.org/10.18653/v1/N18-2003" + } + } + ], + "url": "http://arxiv.org/html/2404.04838v2" +} \ No newline at end of file