{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:41.869085Z" }, "title": "Identifying and Measuring Annotator Bias Based on Annotators' Demographic Characteristics", "authors": [ { "first": "Hala", "middle": [], "last": "Al Kuwatly", "suffix": "", "affiliation": {}, "email": "hala.kuwatly@tum.de" }, { "first": "T", "middle": [ "U" ], "last": "Munich", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Maximilian", "middle": [], "last": "Wich", "suffix": "", "affiliation": {}, "email": "maximilian.wich@tum.de" }, { "first": "Georg", "middle": [], "last": "Groh", "suffix": "", "affiliation": {}, "email": "grohg@in.tum.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Machine learning is recently used to detect hate speech and other forms of abusive language in online platforms. However, a notable weakness of machine learning models is their vulnerability to bias, which can impair their performance and fairness. One type is annotator bias caused by the subjective perception of the annotators. In this work, we investigate annotator bias using classification models trained on data from demographically distinct annotator groups. To do so, we sample balanced subsets of data that are labeled by demographically distinct annotators. We then train classifiers on these subsets, analyze their performances on similarly grouped test sets, and compare them statistically. Our findings show that the proposed approach successfully identifies bias and that demographic features, such as first language, age, and education, correlate with significant performance differences.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Machine learning is recently used to detect hate speech and other forms of abusive language in online platforms. However, a notable weakness of machine learning models is their vulnerability to bias, which can impair their performance and fairness. One type is annotator bias caused by the subjective perception of the annotators. In this work, we investigate annotator bias using classification models trained on data from demographically distinct annotator groups. To do so, we sample balanced subsets of data that are labeled by demographically distinct annotators. We then train classifiers on these subsets, analyze their performances on similarly grouped test sets, and compare them statistically. Our findings show that the proposed approach successfully identifies bias and that demographic features, such as first language, age, and education, correlate with significant performance differences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "According to the online harassment report published by Pew Research Center, \"four-in-ten Americans have personally experienced online harassment, and 62% consider it a major issue.\" (Duggan, 2017, p.3) . Online environments such as social media and discussion forums have created spaces for people to express their opinions and viewpoints, but this comes at the cost of hateful, offensive, and abusive content. 
Moderating this content manually requires a lot of staff and large amounts of hand-curated policies, which generated much interest in automatic content moderation systems that make use of recent advances in machine learning (Schmidt and Wiegand, 2017) .", "cite_spans": [ { "start": 182, "end": 201, "text": "(Duggan, 2017, p.3)", "ref_id": null }, { "start": 635, "end": 662, "text": "(Schmidt and Wiegand, 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One challenge of training machine learning systems is the demand for large amounts of labeled * These authors contributed equally to this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "data. Hence, many researchers use crowdsourcing platforms to annotate their data sets Founta et al., 2018; Vidgen and Derczynski, 2020) , although having expert annotators has proven to improve the quality of annotations (Waseem, 2016) . Such crowdsourcing approaches, however, exposes hate speech detection systems to annotator bias. Hateful behavior can take many forms (Waseem et al., 2017) , making it harder to obtain a clean, common definition of hate speech, and resulting in subjective and biased annotations. Biases in the annotations are then absorbed and reinforced by the machine learning models, causing systematically unfair systems (Bender and Friedman, 2018) . Therefore, it is not surprising that a large body of work has identified and mitigated this bias (Bender and Friedman, 2018; Bountouridis et al., 2019; Dixon et al., 2018) .", "cite_spans": [ { "start": 86, "end": 106, "text": "Founta et al., 2018;", "ref_id": "BIBREF9" }, { "start": 107, "end": 135, "text": "Vidgen and Derczynski, 2020)", "ref_id": "BIBREF21" }, { "start": 221, "end": 235, "text": "(Waseem, 2016)", "ref_id": "BIBREF24" }, { "start": 372, "end": 393, "text": "(Waseem et al., 2017)", "ref_id": "BIBREF25" }, { "start": 647, "end": 674, "text": "(Bender and Friedman, 2018)", "ref_id": "BIBREF1" }, { "start": 774, "end": 801, "text": "(Bender and Friedman, 2018;", "ref_id": "BIBREF1" }, { "start": 802, "end": 828, "text": "Bountouridis et al., 2019;", "ref_id": "BIBREF3" }, { "start": 829, "end": 848, "text": "Dixon et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We already know that people with particular demographic characteristics (e.g., black, disabled, or younger people) become more frequently targets of hate (Vidgen et al., 2019b) . An aspect that is sparsely investigated in this context is the relation between annotators' demographic features and a potential bias in the data set. We want to fill this gap by addressing the following research question:", "cite_spans": [ { "start": 154, "end": 176, "text": "(Vidgen et al., 2019b)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "How do annotators' demographic features such as gender, age, education and first language impact their annotations of hateful content?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To answer this question, we conduct the following exploratory study: We sample balanced subsets of data that are labeled by demographically distinct annotators. 
We then train classifiers on these subsets, analyze their performances on similarly split test sets, and compare them statistically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Since unintended bias in hate speech datasets can impair the model's performance (Waseem, 2016) and fairness (Vidgen et al., 2019a; Dixon et al., 2018) , a lot of recent work has been done to investigate this phenomenon (Wiegand et al., 2019; Kim et al., 2020) .", "cite_spans": [ { "start": 81, "end": 95, "text": "(Waseem, 2016)", "ref_id": "BIBREF24" }, { "start": 109, "end": 131, "text": "(Vidgen et al., 2019a;", "ref_id": "BIBREF22" }, { "start": 132, "end": 151, "text": "Dixon et al., 2018)", "ref_id": "BIBREF7" }, { "start": 220, "end": 242, "text": "(Wiegand et al., 2019;", "ref_id": "BIBREF28" }, { "start": 243, "end": 260, "text": "Kim et al., 2020)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Some work examined racial bias (Sap et al., 2019; Davidson et al., 2019; Xia et al., 2020) , others explored gender bias (Gold and Zesch, 2018), aggregation bias (Balayn et al., 2018) and political bias (Wich et al., 2020b) . The type of bias we are examining in this study is the annotator bias. Waseem (2016) studied the influence of annotator expertise on classification models and found that systems trained on expert annotations outperform those trained on amateur annotations, confirming and extending the results from Ross et al. (2017) . Geva et al. (2019) showed that model performance improves when exposed to annotator identifiers, which suggests that annotator bias needs to be considered when creating hate speech models. Salminen et al. (2018) studied the difference between annotations of crowd workers from 50 countries and found those differences highly significant. Binns et al. (2017) examined the effect of the gender of the annotators on the performance of classifiers. Wich et al. (2020a) studied the similarities in the behaviour of the annotators to reveal biases that they bring into the data.", "cite_spans": [ { "start": 31, "end": 49, "text": "(Sap et al., 2019;", "ref_id": "BIBREF17" }, { "start": 50, "end": 72, "text": "Davidson et al., 2019;", "ref_id": "BIBREF4" }, { "start": 73, "end": 90, "text": "Xia et al., 2020)", "ref_id": "BIBREF31" }, { "start": 162, "end": 183, "text": "(Balayn et al., 2018)", "ref_id": "BIBREF0" }, { "start": 203, "end": 223, "text": "(Wich et al., 2020b)", "ref_id": "BIBREF27" }, { "start": 525, "end": 543, "text": "Ross et al. (2017)", "ref_id": "BIBREF14" }, { "start": 546, "end": 564, "text": "Geva et al. (2019)", "ref_id": "BIBREF10" }, { "start": 735, "end": 757, "text": "Salminen et al. (2018)", "ref_id": "BIBREF15" }, { "start": 884, "end": 903, "text": "Binns et al. (2017)", "ref_id": "BIBREF2" }, { "start": 991, "end": 1010, "text": "Wich et al. 
(2020a)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "To the best of our knowledge, no one has developed a method to identify annotator bias based on multiple demographic characteristics of the annotators and measure its impact on the classification performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "We used the personal attack corpora from Wikipedia's Detox project (Wulczyn et al., 2017) , which contains 115,864 labeled comments from Wikipedia on whether the comment contains a form of personal attack. The labels are the following (Wikimedia, n.d ", "cite_spans": [ { "start": 67, "end": 89, "text": "(Wulczyn et al., 2017)", "ref_id": "BIBREF30" }, { "start": 235, "end": 250, "text": "(Wikimedia, n.d", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "\u2022 Quoting attack: Indicator for whether the annotator thought the comment is quoting or reporting a personal attack that originated in a different comment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 Recipient attack: Indicator for whether the annotator thought the comment contains a personal attack directed at the recipient of the comment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 Third party attack: Indicator for whether the annotator thought the comment contains a personal attack directed at a third party.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 Other attack: Indicator for whether the annotator thought the comment contains a personal attack but is not quoting attack, a recipient attack or third party attack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 Attack: Indicator for whether the annotator thought the comment contains any form of personal attack. (Wikimedia, n.d.) For our study, we used the attack label as the classification target label, not taking into consideration the other labels.", "cite_spans": [ { "start": 104, "end": 121, "text": "(Wikimedia, n.d.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "The comments were labeled by 4,053 crowdworkers. For 2,190 of them, we have the demographic information. For each of these annotators we have the following demographic features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 Gender: 'male' or 'female'", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 English first language: '1' or '0'; '1' = annotator's first language is English", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 Age group:'Under 18', '18-30', '30-45', '45-60', 'Over 60'. Since annotators are not equally distributed across age groups (see distribution plot in the appendix), we changed the grouping to 'Under 30' and 'Over 30'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "\u2022 Education (highest obtained education level): 'none', 'some', 'hs', 'bachelors', 'masters', 'doctorate', 'professional'. 'hs' is short for high school. 
Since annotators are not equally distributed across education levels (see distribution plot in the appendix), we changed the grouping to 'Below hs' (includes hs) and 'Above hs'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": ".):", "sec_num": null }, { "text": "We address the research question by training classification models on data from demographically distinct groups and comparing their performances 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "1 Code available on GitHub: https://github.com/mawic/ annotator-bias-demographic-characteristics", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "The hypothesis is that a statistically significant difference between the classifiers' performances indicates an annotator bias related to the studied demographic feature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "In the first step, we group the annotators by their demographic features, such as gender, age, education level, and native language. For each of those features, we create m + 1 datasets where m is the number of different values a demographic feature can take, e.g. for gender m could be equal to 2 if we only consider male and female annotators. All datasets have the same comments, but with different labels aggregated from annotators belonging to each different group. The additional dataset ( +1) has labels aggregated from annotators belonging to all groups. It serves as a control group. We call this dataset the mixed dataset. We measured the inter-rater agreement within each group using Krippendorff's alpha (Hayes and Krippendorff, 2007) .", "cite_spans": [ { "start": 716, "end": 746, "text": "(Hayes and Krippendorff, 2007)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "In the second step, we split the datasets into train and test sets, and train 20 classifiers for each group on the group's training set and report F1 scores for all test sets. We train 20 classifiers to get multiple data points for each group's classifier and then apply the Kolmogorov-Smirnov test to examine whether they are significantly different 2 . The null hypothesis in this context is that the two samples are drawn from the same distribution. If we can reject the null hypothesis (p < 0.05) for a certain demographic feature, this will be evidence that annotators belonging to different groups of feature values hold different norms and are bringing in different biases into their annotations. Concerning the classification model, we chose to make use of recent advancements in transfer learning and employ DistilBERT as a classifier due to the limited number of data points annotated by each group. DistilBERT (Sanh et al., 2019 ) is a smaller and faster distilled version of BERT (Devlin et al., 2018) . In the context of abusive language detection, it provides a comparable performance . We used the base uncased version of DistilBERT (distilbert-base-uncased) with a maximum sequence length of 100, a learning rate of 5 \u00d7 10 \u22126 , and 1cycle learning rate policy (Smith, 2018) and trained each classifier for 2 epochs. 
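As a concrete illustration of this setup, the following is a minimal sketch of fine-tuning one group's classifier with the stated hyperparameters, assuming the Hugging Face transformers and PyTorch APIs. It is not the authors' released code (see footnote 1), and `comments`/`labels` are placeholders for one group's aggregated training data.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification


def train_group_classifier(comments, labels, epochs=2, max_len=100, lr=5e-6):
    """Fine-tune distilbert-base-uncased on one group's aggregated labels."""
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)

    enc = tokenizer(list(comments), truncation=True, padding=True,
                    max_length=max_len, return_tensors="pt")
    dataset = TensorDataset(enc["input_ids"], enc["attention_mask"],
                            torch.tensor(labels))
    loader = DataLoader(dataset, batch_size=16, shuffle=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    # 1cycle learning rate policy (Smith, 2018)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=lr, total_steps=epochs * len(loader))

    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, y in loader:
            loss = model(input_ids=input_ids,
                         attention_mask=attention_mask, labels=y).loss
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
    return tokenizer, model
```

In the experiments, 20 such classifiers are trained per demographic group, each time re-sampling the labels from a different random set of annotators.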
2 We trained 20 classifiers only for practical constraints.", "cite_spans": [ { "start": 921, "end": 939, "text": "(Sanh et al., 2019", "ref_id": "BIBREF16" }, { "start": 992, "end": 1013, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" }, { "start": 1332, "end": 1333, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "To ensure the comparability of the classifiers, it is necessary to compile the training and test sets in the right way. Therefore, we define the following 2 conditions for selecting the comments: (1) All data sets of one feature contain the same comments. (2) At least 6 annotators from each demographic group annotated the comment. In the case of the gender group, that means a selected comment was annotated by at least 6 male and 6 female annotators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data split", "sec_num": "4.1" }, { "text": "For each demographic feature, we create 3 training and test set combinations. In the first one, the labels are taken from a random set of 6 annotators belonging to the first demographic group (e.g., males). In the second one, the labels of the comments are taken from a random set of 6 annotators belonging to the second demographic group (e.g., females). The third train and test sets are mixed: the labels of the comments are taken from a random set of 3 annotators belonging to the first demographic group and 3 annotators belonging to the second demographic group. While the subset of comments stays unchanged, for each of the 20 classifiers we sample the annotations of different random annotators. Data sets' sizes can be found in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 737, "end": 744, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data split", "sec_num": "4.1" }, { "text": "We also performed the same experiments without the limitation of sharing the same comments in the data sets of each feature, in order to increase the size of comments in the splits. Results were very similar to our shared comments experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data split", "sec_num": "4.1" }, { "text": "In this section, we report the results of our experiments for each demographic feature. The results comprise the inter-rater agreement of the annotators in the different groups, the averaged F1 scores of the trained classifiers, the sensitivity and specificity of the classifiers as charts, and the p-values generated by the Kolmogorov-Smirnov tests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In regards to gender, we could not find evidence of any significant difference between male and female classifiers. Although the inter-rater agreement is significantly lower for females (0.45) than for males (0.51) (Table 4) , the average F1 scores of the 20 classifiers trained for each group show no significant difference (Table 2) . When analyzing the sensitivity and specificity graphs in Figure 1a , one can also see no significant pattern or trend. The p-value resulting from the Kolmogorov-Smirnov test applied on the F1 scores of the 20 male classifiers and 20 female classifiers evaluated on the mixed test set is 0.83 (Table 3 ). 
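For reference, the statistical comparison behind this p-value is a standard two-sample Kolmogorov-Smirnov test. A minimal sketch with SciPy is shown below; the two lists are placeholders for the F1 scores of the 20 classifiers of each gender group evaluated on the mixed test set.

```python
from scipy.stats import ks_2samp

# Placeholder values; in the experiment each list holds the F1 scores of the
# 20 classifiers of one gender group evaluated on the mixed test set.
f1_male = [0.81, 0.80, 0.82, 0.79, 0.80]
f1_female = [0.80, 0.81, 0.79, 0.80, 0.82]

# Null hypothesis: both samples are drawn from the same distribution.
statistic, p_value = ks_2samp(f1_male, f1_female)
print(f"p-value: {p_value:.3f}, significant: {p_value < 0.05}")
```

The same test is applied to the classifier pairs of every demographic feature.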
Since it is larger than 0.05, we cannot conclude that a significant difference between the male and female classifier exists.", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 224, "text": "(Table 4)", "ref_id": "TABREF6" }, { "start": 325, "end": 334, "text": "(Table 2)", "ref_id": "TABREF3" }, { "start": 394, "end": 403, "text": "Figure 1a", "ref_id": "FIGREF1" }, { "start": 629, "end": 637, "text": "(Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Gender", "sec_num": "5.1" }, { "text": "Our experiments on first language classifiers resulted in the following observations: 1. Classifiers trained on native-labeled data have a notably higher F1 score (Table 2) and are also more sensitive to all test sets (the blue triangles in Figure 1b) , which suggests that they are particularly better at classifying comments that contain personal attack.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 172, "text": "(Table 2)", "ref_id": "TABREF3" }, { "start": 241, "end": 251, "text": "Figure 1b)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "First Language", "sec_num": "5.2" }, { "text": "2. Classifiers trained on only non-native-labeled data perform almost as good as the baseline (classifier trained on mix-labeled data) ( Table 2) .", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 146, "text": "Table 2)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "First Language", "sec_num": "5.2" }, { "text": "3. We found very minor disparities in the specificity of both classifiers (Figure 1b ).", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 84, "text": "(Figure 1b", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "First Language", "sec_num": "5.2" }, { "text": "The result of the Kolmogorov-Smirnov test on native and non-native classifiers is a p-value of 1.0 \u00d7 10 \u22123 (Table 3) , thus we can reject the null hypothesis and conclude that a significant difference does exist between them.", "cite_spans": [], "ref_spans": [ { "start": 107, "end": 116, "text": "(Table 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "First Language", "sec_num": "5.2" }, { "text": "Our experiments resulted in the following observations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Age group", "sec_num": "5.3" }, { "text": "1. Classifiers trained on over-30-labeled data have higher F1 scores than classifiers trained on under-30 labeled data on all test sets. They are however comparable to the baseline (classifier trained on mix-labeled data) ( Table 2 ).", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 231, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Age group", "sec_num": "5.3" }, { "text": "2. All classifiers are less sensitive to over-30labeled test set (Figure 1c) , which might suggest that it contains harder examples that all classifiers failed to correctly classify. 
The Kolmogorov-Smirnov test on the results of the two classifiers produces a p-value of 1.1 \u00d7 10 \u22128 (Table 3) , thus we can reject that they come from the same distribution and conclude that a significant difference does exist between them.", "cite_spans": [], "ref_spans": [ { "start": 65, "end": 76, "text": "(Figure 1c)", "ref_id": "FIGREF1" }, { "start": 283, "end": 292, "text": "(Table 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Age group", "sec_num": "5.3" }, { "text": "Our experiments resulted in the following observations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Education", "sec_num": "5.4" }, { "text": "1. The F1 scores of the classifiers trained on below-hs-labeled data are higher than scores of classifiers trained on above-hs-labeled data on all test sets (Table 2) .", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 166, "text": "(Table 2)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Education", "sec_num": "5.4" }, { "text": "2. Classifiers trained on below-hs-labeled data have a comparable specificity to the other classifiers but with a notably higher sensitivity on all test sets. (Figure 1d ).", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 169, "text": "(Figure 1d", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Education", "sec_num": "5.4" }, { "text": "The Kolmogorov-Smirnov test with a p-value of 1.4 \u00d7 10 \u22127 (Table 3) also shows that there exists a significant difference between the two groups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Education", "sec_num": "5.4" }, { "text": "In light of our results, we can conclude that the gender of the annotator does not bring a significant bias in annotating personal attacks in the studied dataset. However, when Binns et al. (2017) explored the role of gender in offensive content annotations, they established a distinguishable difference between males and females. We think this is related to the nature of the annotation task itself. To investigate other tasks, our approach can further be applied in future work on the other data sets provided by Wikipedia's Detox project (Wulczyn et al., 2017) such as aggressiveness and toxicity to investigate the effects of gender for those tasks.", "cite_spans": [ { "start": 177, "end": 196, "text": "Binns et al. (2017)", "ref_id": "BIBREF2" }, { "start": 542, "end": 564, "text": "(Wulczyn et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "When it comes to the first language of the annotators, it seems that native English speakers are gen-erally better at identifying personal attacks in comments. The results also suggest that non-natives could not capture attack in comments that natives found to contain attack.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In addition, age groups and education levels of the annotators also seem to play a notable role in how attacks are perceived. 
Training a classifier on aggregated labels from all groups, even if the data is balanced between groups, does not seem to be fair to all groups involved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Although we have only explored the demographic features provided by the data set and grouped some of them for reasons dictated by the data size, we think other features (e.g., race, ethnicity, and political orientation), different within feature groupings and feature intersections might produce new biases. While exploring all possible demographic features prior to building models is simply infeasible, the set of studied features can be determined per task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Our approach demonstrated how particular training sets labeled by different groups of people can be used to identify and measure bias in data sets. These biases are never constant or static even within one group, for what counts as hateful is always subjective. In consequence, having only one version of ground truth is bound to produce biased systems. It is inevitable that training models on biased datasets produces systems that amplify those biases, whether these biases are exclusionary, prejudicial, or historical. Therefore and due to the conflicting and ever-changing definitions of hate speech among communities, we urge researchers in the hate speech domain to examine their data sets closely and thoroughly in order to understand their limitations and consequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "This work explored bias in hate speech classification models where the task is inherently controversial and annotators' demographic data might influence the labels. We demonstrate how particular demographic features might bias the models in ways that are important to look into prior to using such models in production. We explored the performance of classification models trained and tested on different training and test data splits, in order to identify the fairness of these classifiers and the biases they absorb. 
We hope that our proposed method for identifying and measuring annotator bias based on annotators' demographic characteris-tics will help to build fairer hate speech classifiers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" } ], "back_matter": [ { "text": "This research has been partially funded by a scholarship from the Hanns Seidel Foundation financed by the German Federal Ministry of Education and Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Characterising and mitigating aggregationbias in crowdsourced toxicity annotations", "authors": [ { "first": "Agathe", "middle": [], "last": "Balayn", "suffix": "" }, { "first": "Panagiotis", "middle": [], "last": "Mavridis", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Bozzon", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Timmermans", "suffix": "" }, { "first": "Zolt\u00e1n", "middle": [], "last": "Szl\u00e1vik", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management", "volume": "2276", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agathe Balayn, Panagiotis Mavridis, Alessandro Boz- zon, Benjamin Timmermans, and Zolt\u00e1n Szl\u00e1vik. 2018. Characterising and mitigating aggregation- bias in crowdsourced toxicity annotations. In Pro- ceedings of the 1st Workshop on Subjectivity, Am- biguity and Disagreement in Crowdsourcing, and Short Paper Proceedings of the 1st Workshop on Disentangling the Relation Between Crowdsourcing and Bias Management, volume 2276. CEUR.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science", "authors": [ { "first": "M", "middle": [], "last": "Emily", "suffix": "" }, { "first": "Batya", "middle": [], "last": "Bender", "suffix": "" }, { "first": "", "middle": [], "last": "Friedman", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "587--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Like trainer, like bot? inheritance of bias in algorithmic content moderation", "authors": [ { "first": "Reuben", "middle": [], "last": "Binns", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Veale", "suffix": "" }, { "first": "Max", "middle": [], "last": "Van Kleek", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Shadbolt", "suffix": "" } ], "year": 2017, "venue": "International conference on social informatics", "volume": "", "issue": "", "pages": "405--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reuben Binns, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2017. Like trainer, like bot? in- heritance of bias in algorithmic content moderation. In International conference on social informatics, pages 405-415. 
Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Annotating credibility: Identifying and mitigating bias in credibility datasets", "authors": [ { "first": "Dimitrios", "middle": [], "last": "Bountouridis", "suffix": "" }, { "first": "Mykola", "middle": [], "last": "Makhortykh", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Sullivan", "suffix": "" }, { "first": "Jaron", "middle": [], "last": "Harambam", "suffix": "" }, { "first": "Nava", "middle": [], "last": "Tintarev", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Hauff", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dimitrios Bountouridis, Mykola Makhortykh, Emily Sullivan, Jaron Harambam, Nava Tintarev, and Clau- dia Hauff. 2019. Annotating credibility: Identifying and mitigating bias in credibility datasets.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Racial bias in hate speech and abusive language detection datasets", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Debasmita", "middle": [], "last": "Bhattacharya", "suffix": "" }, { "first": "Ing", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1905.12516" ] }, "num": null, "urls": [], "raw_text": "Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. arXiv preprint arXiv:1905.12516.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.04009" ] }, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. arXiv preprint arXiv:1703.04009.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. 
arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Measuring and mitigating unintended bias in text classification", "authors": [ { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" }, { "first": "John", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vasserman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society", "volume": "", "issue": "", "pages": "67--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Online Harassment", "authors": [ { "first": "Maeve", "middle": [], "last": "Duggan", "suffix": "" } ], "year": 2017, "venue": "Pew Research Center", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maeve Duggan. 2017. Online Harassment 2017. Pew Research Center.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Large scale crowdsourcing and characterization of twitter abusive behavior", "authors": [ { "first": "Antigoni-Maria", "middle": [], "last": "Founta", "suffix": "" }, { "first": "Constantinos", "middle": [], "last": "Djouvas", "suffix": "" }, { "first": "Despoina", "middle": [], "last": "Chatzakou", "suffix": "" }, { "first": "Ilias", "middle": [], "last": "Leontiadis", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Gianluca", "middle": [], "last": "Stringhini", "suffix": "" }, { "first": "Athena", "middle": [], "last": "Vakali", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Sirivianos", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Kourtellis", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.00393" ] }, "num": null, "urls": [], "raw_text": "Antigoni-Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. arXiv preprint arXiv:1802.00393.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets", "authors": [ { "first": "Mor", "middle": [], "last": "Geva", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.07898" ] }, "num": null, "urls": [], "raw_text": "Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an inves- tigation of annotator bias in natural language under- standing datasets. 
arXiv preprint arXiv:1908.07898.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Do women perceive hate differently: Examining the relationship between hate speech, gender, and agreement judgments", "authors": [ { "first": "Darina", "middle": [], "last": "Michael Wojatzki Tobias Horsmann", "suffix": "" }, { "first": "Torsten", "middle": [], "last": "Gold", "suffix": "" }, { "first": "", "middle": [], "last": "Zesch", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wojatzki Tobias Horsmann Darina Gold and Torsten Zesch. 2018. Do women perceive hate dif- ferently: Examining the relationship between hate speech, gender, and agreement judgments.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Answering the call for a standard reliability measure for coding data. Communication methods and measures", "authors": [ { "first": "F", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Klaus", "middle": [], "last": "Hayes", "suffix": "" }, { "first": "", "middle": [], "last": "Krippendorff", "suffix": "" } ], "year": 2007, "venue": "", "volume": "1", "issue": "", "pages": "77--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew F Hayes and Klaus Krippendorff. 2007. An- swering the call for a standard reliability measure for coding data. Communication methods and mea- sures, 1(1):77-89.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Intersectional bias in hate speech and abusive language datasets", "authors": [ { "first": "Jae Yeon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Ortiz", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Nam", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Santiago", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Datta", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.05921" ] }, "num": null, "urls": [], "raw_text": "Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santi- ago, and Vivek Datta. 2020. Intersectional bias in hate speech and abusive language datasets. arXiv preprint arXiv:2005.05921.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Measuring the reliability of hate speech annotations: The case of the european refugee crisis", "authors": [ { "first": "Bj\u00f6rn", "middle": [], "last": "Ross", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rist", "suffix": "" }, { "first": "Guillermo", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Cabrera", "suffix": "" }, { "first": "Nils", "middle": [], "last": "Kurowsky", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wojatzki", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1701.08118" ] }, "num": null, "urls": [], "raw_text": "Bj\u00f6rn Ross, Michael Rist, Guillermo Carbonell, Ben- jamin Cabrera, Nils Kurowsky, and Michael Wo- jatzki. 2017. Measuring the reliability of hate speech annotations: The case of the european refugee crisis. 
arXiv preprint arXiv:1701.08118.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Online hate interpretation varies by country, but more by individual: A statistical analysis using crowdsourced ratings", "authors": [ { "first": "Joni", "middle": [], "last": "Salminen", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Veronesi", "suffix": "" }, { "first": "Hind", "middle": [], "last": "Almerekhi", "suffix": "" }, { "first": "Soon-Gvo", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Bernard J", "middle": [], "last": "Jansen", "suffix": "" } ], "year": 2018, "venue": "2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS)", "volume": "", "issue": "", "pages": "88--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joni Salminen, Fabio Veronesi, Hind Almerekhi, Soon- Gvo Jung, and Bernard J Jansen. 2018. Online hate interpretation varies by country, but more by indi- vidual: A statistical analysis using crowdsourced rat- ings. In 2018 Fifth International Conference on So- cial Networks Analysis, Management and Security (SNAMS), pages 88-94. IEEE.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.01108" ] }, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The risk of racial bias in hate speech detection", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "Saadia", "middle": [], "last": "Gabriel", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1668--1678", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 1668-1678.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A survey on hate speech detection using natural language processing", "authors": [ { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Fifth International workshop on natural language processing for social media", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. 
In Proceedings of the Fifth International workshop on natural language processing for social media, pages 1-10.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A disciplined approach to neural network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay", "authors": [ { "first": "N", "middle": [], "last": "Leslie", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1803.09820" ] }, "num": null, "urls": [], "raw_text": "Leslie N Smith. 2018. A disciplined approach to neu- ral network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Detecting east asian prejudice on social media", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Botelho", "suffix": "" }, { "first": "David", "middle": [], "last": "Broniatowski", "suffix": "" }, { "first": "Ella", "middle": [], "last": "Guest", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Austin Botelho, David Broniatowski, Ella Guest, Matthew Hall, Helen Margetts, Rebekah Tromble, Zeerak Waseem, and Scott Hale. 2020. De- tecting east asian prejudice on social media.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Directions in abusive language training data: Garbage in, garbage out", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.01670" ] }, "num": null, "urls": [], "raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Direc- tions in abusive language training data: Garbage in, garbage out. arXiv preprint arXiv:2004.01670.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Challenges and frontiers in abusive content detection", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Rebekah", "middle": [], "last": "Tromble", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Margetts", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019a. Challenges and frontiers in abusive content detec- tion. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "How much online abuse is there? 
a systematic review of evidence for the uk", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Margetts", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Harris", "suffix": "" } ], "year": 2019, "venue": "The Alan Turing Institute", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Helen Margetts, and Alex Harris. 2019b. How much online abuse is there? a systematic re- view of evidence for the uk. The Alan Turing Insti- tute.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" } ], "year": 2016, "venue": "Proc. 1st Workshop on NLP and Computational Social Science", "volume": "", "issue": "", "pages": "138--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem. 2016. Are you a racist or am i seeing things? annotator influence on hate speech detection on twitter. In Proc. 1st Workshop on NLP and Com- putational Social Science, pages 138-142.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Understanding abuse: A typology of abusive language detection subtasks", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.09899" ] }, "num": null, "urls": [], "raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. arXiv preprint arXiv:1705.09899.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Investigating annotator bias with a graphbased approach", "authors": [ { "first": "Maximilian", "middle": [], "last": "Wich", "suffix": "" }, { "first": "Al", "middle": [], "last": "Hala", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Kuwatly", "suffix": "" }, { "first": "", "middle": [], "last": "Groh", "suffix": "" } ], "year": 2020, "venue": "Proc. 4th Workshop on Online Abuse and Harms", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Wich, Hala Al Kuwatly, and Georg Groh. 2020a. Investigating annotator bias with a graph- based approach. In Proc. 4th Workshop on Online Abuse and Harms.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Impact of politically biased data on hate speech classification", "authors": [ { "first": "Maximilian", "middle": [], "last": "Wich", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Georg", "middle": [], "last": "Groh", "suffix": "" } ], "year": 2020, "venue": "Proc. 4th Workshop on Online Abuse and Harms", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximilian Wich, Jan Bauer, and Georg Groh. 2020b. Impact of politically biased data on hate speech clas- sification. In Proc. 
4th Workshop on Online Abuse and Harms.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Detection of abusive language: the problem of biased datasets", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kleinbauer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "602--608", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: the problem of biased datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Research:detox/data release", "authors": [ { "first": "", "middle": [ "N" ], "last": "Wikimedia", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wikimedia. n.d. Research:detox/data release. https: //meta.wikimedia.org/wiki/Research: Detox/Data_Release.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Ex machina: Personal attacks seen at scale", "authors": [ { "first": "Ellery", "middle": [], "last": "Wulczyn", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1391--1399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, pages 1391-1399.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Demoting racial bias in hate speech detection", "authors": [ { "first": "Mengzhou", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Anjalie", "middle": [], "last": "Field", "suffix": "" }, { "first": "Yulia", "middle": [], "last": "Tsvetkov", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.12246" ] }, "num": null, "urls": [], "raw_text": "Mengzhou Xia, Anjalie Field, and Yulia Tsvetkov. 2020. Demoting racial bias in hate speech detection. arXiv preprint arXiv:2005.12246.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "(a) Gender groups classifiers evaluated on gender groups test sets (b) Language groups classifiers evaluated on language groups test sets (c) Age groups classifiers evaluated on age groups test sets (d) Education groups classifiers evaluated on education groups test sets", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "The x-axes are the specificity of the classifiers, and the y-axes are the sensitivity. Each transparent dot represents the specificity and sensitivity of each of the 20 classifiers trained for each group on the respective train set (dot marker) and evaluated on the respective test set (sub-figures). 
The opaque dots represent the average values.", "type_str": "figure", "uris": null }, "TABREF1": { "text": "Number of comments in each demographic feature's datasets", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF3": { "text": "Average F1 scores of the classifiers.", "type_str": "table", "content": "
Feature | p-value
Gender | 8.3 \u00d7 10 \u22121
First Language | 1.0 \u00d7 10 \u22123
Age group | 1.1 \u00d7 10 \u22128
Education | 1.4 \u00d7 10 \u22127
", "num": null, "html": null }, "TABREF4": { "text": "Results of the Kolmogorov-Smirnov test, inputs to the tests are the F1 scores of the 20 classifiers evaluated on the mixed test set of each feature.", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF6": { "text": "Inter-rater agreement for all groups", "type_str": "table", "content": "
", "num": null, "html": null } } } }