{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:50.300108Z"
},
"title": "Investigating Annotator Bias with a Graph-Based Approach",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Wich",
"suffix": "",
"affiliation": {},
"email": "maximilian.wich@tum.de"
},
{
"first": "T",
"middle": [
"U"
],
"last": "Munich",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hala",
"middle": [],
"last": "Al Kuwatly",
"suffix": "",
"affiliation": {},
"email": "hala.kuwatly@tum.de"
},
{
"first": "Georg",
"middle": [],
"last": "Groh",
"suffix": "",
"affiliation": {},
"email": "grohg@in.tum.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A challenge that many online platforms face is hate speech or any other form of online abuse. To cope with this, hate speech detection systems are developed based on machine learning to reduce manual work for monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which could have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating minorities). In the scope of this study, we want to investigate annotator bias-a form of bias that annotators cause due to different knowledge in regards to the task and their subjective perception. Our goal is to identify annotation bias based on similarities in the annotation behavior from annotators. To do so, we build a graph based on the annotations from the different annotators, apply a community detection algorithm to group the annotators, and train for each group classifiers whose performances we compare. By doing so, we are able to identify annotator bias within a data set. The proposed method and collected insights can contribute to developing fairer and more reliable hate speech classification models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "A challenge that many online platforms face is hate speech or any other form of online abuse. To cope with this, hate speech detection systems are developed based on machine learning to reduce manual work for monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which could have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating minorities). In the scope of this study, we want to investigate annotator bias-a form of bias that annotators cause due to different knowledge in regards to the task and their subjective perception. Our goal is to identify annotation bias based on similarities in the annotation behavior from annotators. To do so, we build a graph based on the annotations from the different annotators, apply a community detection algorithm to group the annotators, and train for each group classifiers whose performances we compare. By doing so, we are able to identify annotator bias within a data set. The proposed method and collected insights can contribute to developing fairer and more reliable hate speech classification models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A massive problem that online platforms face nowadays is online abuse (e.g., hate speech against women, Muslims, or African Americans). It is a severe issue for our society because it can cause more than poisoning the platform's atmosphere. For example, Williams et al. (2020) showed a relation between online hate and physical crime.",
"cite_spans": [
{
"start": 254,
"end": 276,
"text": "Williams et al. (2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, people have started to develop systems to automatically detect hate speech or abusive language. The advances in machine learning and deep learning have improved these systems tremendously, but there is still much space for enhancements because it is a challenging and complex task (Fortuna and Nunes, 2018; Schmidt and Wiegand, 2017) .",
"cite_spans": [
{
"start": 292,
"end": 317,
"text": "(Fortuna and Nunes, 2018;",
"ref_id": "BIBREF8"
},
{
"start": 318,
"end": 344,
"text": "Schmidt and Wiegand, 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A weakness of these systems is their vulnerability to unintended bias that can cause an unfair behavior of the systems (e.g., discrimination of minorities) (Dixon et al., 2018; Vidgen et al., 2019) . Researchers have identified different types and sources of bias that can influence the performance of hate speech detection models. Davidson et al. (2019) , for example, investigated racial bias in hate speech data sets. Wiegand et al. (2019) showed that topic bias and author bias of data sets could impair the performance of hate speech classifiers. examined the impact of political bias within the data on the classifier's performance. To mitigate bias in training data, Dixon et al. (2018) and Borkan et al. (2019) developed an approach.",
"cite_spans": [
{
"start": 156,
"end": 176,
"text": "(Dixon et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 177,
"end": 197,
"text": "Vidgen et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 332,
"end": 354,
"text": "Davidson et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 421,
"end": 442,
"text": "Wiegand et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 674,
"end": 693,
"text": "Dixon et al. (2018)",
"ref_id": "BIBREF7"
},
{
"start": 698,
"end": 718,
"text": "Borkan et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another type of bias that caught researchers' attention is annotator bias. It is caused by the subjective perception and different knowledge levels of annotators regarding the annotation task (Ross et al., 2017; Waseem, 2016; Geva et al., 2019) . Such a bias could harm the generalizability of classification models (Geva et al., 2019) . Especially in the context of online abuse and hate speech, it can be a severe issue because annotating abusive language requires expert knowledge due to the vagueness of the task (Ross et al., 2017; Waseem, 2016) . Nevertheless, due to the limited resources and the demand for large datasets, annotating is often outsourced to crowdsourcing platforms (Vidgen and Derczynski, 2020) . Therefore, we want to investigate this phenomenon in our paper. There is already research concerning annotator bias in hate speech and online abuse detection. Ross et al. (2017) examined the relevance of instructing annotators for hate speech annotations. Waseem (2016) compared the impact of amateur and expert annotators. One of their findings was that a system trained with data labeled by experts outperforms one trained with data labeled by amateurs. Binns et al. (2017) investigated whether there is a performance difference between classifiers trained on data labeled by males and females. Al Kuwatly et al. (2020) extended this approach and investigated the relevance of annotators' educational background, age, and mother tongue in the context of bias. Sap et al. (2019) examined racial bias in hate speech data sets and its impact on the classification performance. To the best of our knowledge, no one has investigated annotator bias by identifying patterns in the annotation behavior through an unsupervised approach. That is why we address the following research question in the paper: Is it possible to identify annotator bias purely on the annotation behavior using graphs and classification models?",
"cite_spans": [
{
"start": 192,
"end": 211,
"text": "(Ross et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 212,
"end": 225,
"text": "Waseem, 2016;",
"ref_id": "BIBREF22"
},
{
"start": 226,
"end": 244,
"text": "Geva et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 316,
"end": 335,
"text": "(Geva et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 517,
"end": 536,
"text": "(Ross et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 537,
"end": 550,
"text": "Waseem, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 689,
"end": 718,
"text": "(Vidgen and Derczynski, 2020)",
"ref_id": "BIBREF20"
},
{
"start": 880,
"end": 898,
"text": "Ross et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 977,
"end": 990,
"text": "Waseem (2016)",
"ref_id": "BIBREF22"
},
{
"start": 1177,
"end": 1196,
"text": "Binns et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 1321,
"end": 1342,
"text": "Kuwatly et al. (2020)",
"ref_id": "BIBREF0"
},
{
"start": 1483,
"end": 1500,
"text": "Sap et al. (2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A novel approach for grouping annotators according to their annotations behavior through graphs and analyzing the different groups in order to identify annotator bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 A comparison of different weight functions for constructing the annotator graph modeling the annotator behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For our study, we use the Personal Attacks corpora from the Wikipedia Detox project (Wulczyn et al., 2017) . It contains 115,864 comments from English Wikipedia that were labeled whether they comprise personal attack or not. In total, there are 1,365,217 annotations provided by 4,053 annotators from the crowdsourcing platform Crowdflower -approximately 10 annotations for each comment. Each annotation consists of 5 categories distinguishing between different types of attack: quoting attack, recipient attack, third party attack, other attack, and attack. In our experiments, we only use the 5 th category (attack) because it covers a broader range than the other labels. Its value is 1 if \"the comment contains any form of personal attack\" (Wikimedia, n.d.) . Otherwise it is 0. The corpora also contain demographic information (e.g., gender, age, and education) of 2,190 annotators. But this data is not relevant to our study.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Wulczyn et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 744,
"end": 761,
"text": "(Wikimedia, n.d.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
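As an illustration of how this corpus can be loaded, the following sketch assumes the publicly released Wikipedia Detox files attack_annotated_comments.tsv and attack_annotations.tsv with columns rev_id, worker_id, and attack; the file and column names are assumptions based on the public data release, not details stated in the paper.

```python
import pandas as pd

# Load the Personal Attacks corpus (file and column names are assumed from the
# public Wikipedia Detox release; adjust them if your copy differs).
comments = pd.read_csv("attack_annotated_comments.tsv", sep="\t", index_col="rev_id")
annotations = pd.read_csv("attack_annotations.tsv", sep="\t")

# Keep only the binary 'attack' label; the four attack sub-type labels are ignored.
annotations = annotations[["rev_id", "worker_id", "attack"]]

print(len(comments), "comments,", len(annotations), "annotations,",
      annotations["worker_id"].nunique(), "annotators")
```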
{
"text": "Our approach is to group annotators according to their annotation behavior and analyze perfor-mance of classification models trained on annotations from these groups. To do so, we firstly group the annotators according to their annotation behavior using a graph. Secondly, we split the data set by the groups and their respective annotations. Thirdly, we train classifiers for each annotator group and then compare their performances. The reader can find a detailed description of the steps in the following 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Creating Annotator Graph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "In the first step, we create an undirected unweighted graph to model the annotation behavior of the annotators (e.g., how similar the annotations of two annotators are). Each node represents an annotator. An edge between two nodes exists if both annotators annotate at least one same data record. Additionally, each edge has a weight that models the similarity between the annotations of the data records. To calculate the weight, we selected four functions that we will compare:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
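The following is a minimal sketch of this graph construction using networkx. It assumes the annotations DataFrame layout from the data-loading sketch above (columns rev_id, worker_id, attack), and weight_fn stands in for any of the four weight functions compared below; it is not the authors' implementation.

```python
from collections import defaultdict
from itertools import combinations

import networkx as nx

def build_annotator_graph(annotations, weight_fn):
    """Nodes are annotators; an edge connects two annotators who labeled at
    least one common comment and carries a similarity weight."""
    labels = defaultdict(dict)                 # worker_id -> {rev_id: label}
    workers_per_comment = defaultdict(set)     # rev_id -> {worker_id, ...}
    for row in annotations.itertuples(index=False):
        labels[row.worker_id][row.rev_id] = row.attack
        workers_per_comment[row.rev_id].add(row.worker_id)

    # Candidate edges: all annotator pairs that share at least one comment.
    pairs = set()
    for workers in workers_per_comment.values():
        pairs.update(combinations(sorted(workers), 2))

    graph = nx.Graph()
    graph.add_nodes_from(labels)
    for a, b in pairs:
        common = sorted(labels[a].keys() & labels[b].keys())
        labels_a = [labels[a][c] for c in common]
        labels_b = [labels[b][c] for c in common]
        weight = weight_fn(labels_a, labels_b)
        if weight is not None:                 # None signals an undefined weight: no edge
            graph.add_edge(a, b, weight=weight)
    return graph
```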
{
"text": "1. Agreement Rate: It is the percentage in which both annotators agree on the annotation for a data record:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "a = n agree n agree + n disagree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "where n agree is the number of data records that both annotated and assigned the same labels to and n disagree is the number of data records that both annotated and assigned different labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
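A possible implementation of this weight, written to plug into the weight_fn parameter of the graph-construction sketch above (the two arguments are the aligned labels of the commonly annotated records):

```python
def agreement_rate(labels_a, labels_b):
    """Agreement rate a = n_agree / (n_agree + n_disagree) on common records."""
    n_agree = sum(x == y for x, y in zip(labels_a, labels_b))
    return n_agree / len(labels_a)
```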
{
"text": "2. Cohen's kappa (Cohen, 1960) : It is often used as a measure for inter-rater reliability.",
"cite_spans": [
{
"start": 17,
"end": 30,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u03ba = p 0 \u2212 p e 1 \u2212 p e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "where p 0 is the \"proportion of observed agreements\" (Sim and Wright, 2005, p.258) among the data records annotated by both annotators and p e is \"proportion of agreements expected by chance\" (Sim and Wright, 2005, p.258) among the records. The range of \u03ba is between \u22121 and +1. +1 corresponds perfect agreement; \u2264 0 means agreement at chance or no agreement (Cohen, 1960) . If both annotators select the same label for all records, \u03ba is not defined. In this case, we remove the edge. An alternative would be to keep the edge and assign 1. But we rejected this idea because of the following consideration. Let us assume that we have 4 annotators (A,B,C, and D). A and B assigned the same label to the same comment. C and D assigned the same labels to the same 20 comments. In both cases, \u03ba is not defined. Assigning the same value (e.g., 1) to both edges would weigh both equally. But the edge between C and D should receive a higher weight because the agreement between A and B could be a coincidence.",
"cite_spans": [
{
"start": 53,
"end": 82,
"text": "(Sim and Wright, 2005, p.258)",
"ref_id": null
},
{
"start": 192,
"end": 221,
"text": "(Sim and Wright, 2005, p.258)",
"ref_id": null
},
{
"start": 358,
"end": 371,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
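A sketch of Cohen's kappa as an edge weight, using scikit-learn's cohen_kappa_score and dropping the edge (returning None) in the undefined case described above. The linear rescaling from [-1, +1] to [0, 1] is an assumption; the paper only states that all weights are normalized to that range.

```python
import math

from sklearn.metrics import cohen_kappa_score

def kappa_weight(labels_a, labels_b):
    """Cohen's kappa between two annotators; None (no edge) where undefined."""
    # Kappa is undefined when both annotators assign one identical label to
    # every common record; following the paper, the edge is removed then.
    if len(set(labels_a)) == 1 and labels_a == labels_b:
        return None
    kappa = cohen_kappa_score(labels_a, labels_b)
    if math.isnan(kappa):
        return None
    # Rescale from [-1, 1] to [0, 1] (one way to satisfy the paper's
    # normalization of all weight functions to this range).
    return (kappa + 1) / 2
```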
{
"text": "3. Krippendorff's alpha (Krippendorff, 2004) : It is another inter-rater reliability measure, which is defined as follows:",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "(Krippendorff, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u03b1 = 1 \u2212 D 0 D e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\"where D 0 is the observed disagreement among values assigned to units of analysis [...] and D e is the disagreement one would expect when the coding of units is attributable to chance rather than to the properties of these units\" (Krippendorff, 2011, p.1) . Further details of the calculation are provided by Krippendorff (2011). Similar to \u03ba, \u03b1 is not defined if the annotators choose the same label for all records. We handle this case in the same way as above.",
"cite_spans": [
{
"start": 83,
"end": 88,
"text": "[...]",
"ref_id": null
},
{
"start": 231,
"end": 256,
"text": "(Krippendorff, 2011, p.1)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "To overcome the undefined issue, we define a heuristic weight function taking the relative agreement rate and the number of commonly annotated data records (overlap) between two annotators into account. The function is defined by four boundary points:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristic:",
"sec_num": "4."
},
{
"text": "\u2022 The maximum weight (1.0) is reached, if two annotators commonly annotated n data records and agree on all annotations. n is the maximal number of data records that is commonly annotated by two annotators and is defined by the data set. \u2022 The minimum weight (0) is reached, if two annotators commonly annotated n data records and disagree on all annotations. \u2022 A weight that is 20% larger than the minimum weight (0.2) is reached, if two an-notators commonly annotated only one data record and disagree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristic:",
"sec_num": "4."
},
{
"text": "\u2022 A weight that is 60% larger than the minimum weight (0.6) is reached, if two annotators commonly annotated only one data record and agree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristic:",
"sec_num": "4."
},
{
"text": "The transition between the four boundary points is gradually calculated. The algorithm can be found in the appendix. The purpose of the approach is to consider the overlap besides the agreement rate because the larger the overlap the more reliable is the agreement rate. Cohen's alpha and Krippendorff's alpha provide this, but their weakness is the undefined issue, which is a realistic scenario for our annotation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristic:",
"sec_num": "4."
},
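Since the exact transition is only given in the paper's appendix, the sketch below interpolates bilinearly between the four boundary points as one plausible "gradual" transition; the bilinear form and the tie to max_overlap (the data set's maximal overlap n) are assumptions. Because n is a property of the whole data set, one would bind it first, e.g. with functools.partial(heuristic_weight, max_overlap=n), before passing the function to the graph builder.

```python
def heuristic_weight(labels_a, labels_b, max_overlap):
    """Heuristic weight combining agreement rate and overlap size.

    Bilinear interpolation between the paper's four boundary points; the
    actual interpolation is defined in the paper's appendix and may differ.
    """
    overlap = len(labels_a)                    # number of commonly annotated records
    agreement = sum(x == y for x, y in zip(labels_a, labels_b)) / overlap

    # overlap == 1: weight runs from 0.2 (full disagreement) to 0.6 (full agreement);
    # overlap == max_overlap: weight runs from 0.0 to 1.0.
    low = 0.2 + 0.4 * agreement
    high = agreement
    t = 0.0 if max_overlap <= 1 else (overlap - 1) / (max_overlap - 1)
    return (1 - t) * low + t * high
```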
{
"text": "All weight functions are normalized between 0 and 1 to make the results comparable, if they are not already in this range.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristic:",
"sec_num": "4."
},
{
"text": "The goal of the next step is to group the annotators according to their annotation behavior. For this purpose, we apply the Louvain method, an unsupervised algorithm for detecting communities in a graph (Blondel et al., 2008) . After that, we filter the communities with at least 250 members. Otherwise, the groups do not comprise enough data records that were annotated by their members in order to train a classification model.",
"cite_spans": [
{
"start": 203,
"end": 225,
"text": "(Blondel et al., 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting Annotator Groups",
"sec_num": null
},
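A minimal sketch of this step; it assumes a networkx version that ships louvain_communities (the paper does not state which Louvain implementation was used), and the fixed seed is only there to make the sketch reproducible.

```python
from networkx.algorithms.community import louvain_communities

def detect_groups(graph, min_size=250, seed=0):
    """Louvain community detection, keeping only communities with at least min_size annotators."""
    communities = louvain_communities(graph, weight="weight", seed=seed)
    return [community for community in communities if len(community) >= min_size]
```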
{
"text": "After detecting the groups, we split the comments and annotations according to the groups. For each weight function and the corresponding graph, we do the following: We select those comments that were annotated by at least one member of every group. For each group, we create a data set containing these comments and the annotations from the group's members. The label for each comment is the majority vote of the group's annotators. In addition, we create a further data set that serves as a baseline and is called group 0 for all experiments. The data set contains the same comments, but the labels are the results of all 4,053 annotators. After that, all data sets for a weight function are split in a training and test set in the same manner to ensure the comparability of the data sets. This is done for each of the four weight functions. For the classification model, we use a pre-trained DistilBERT that we fine-tune for our task (Sanh et al., 2019) . It is smaller and faster to train than classical BERT, but it provides comparable performance (Sanh et al., 2019) . In the context of abusive language detection, it shows a similar performance like larger BERT models ). Since we need to train several models for different weight functions and groups, we choose the lighter model. The basis of our classification model is the pre-trained distilbert-base-uncased, which is the distilled version version of bert-base-uncased. It has 6 layers, a hidden size of 768, 12 self-attention heads, and 66M parameters. To fine-tune the model for our task, we apply the 1cycle learning rate policy suggested by Smith (2018) with a learning rate of 5e-6 for 2 epochs. The batch size is 64 and the size of the validation set is 10% of the training set. Furthermore, we limit the number of input tokens to 150. The task that DistilBERT is fine-tuned for is to distinguish between the labels \"ATTACK\" and \"OTHER\".",
"cite_spans": [
{
"start": 937,
"end": 956,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1053,
"end": 1072,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting Data According to Groups",
"sec_num": null
},
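A condensed sketch of the group-wise data splits and of the model setup described above. The groups argument is the output of the community detection sketch, the annotations layout is the one assumed earlier, ties in the majority vote are resolved towards the attack label (an assumption), and the 80/20 split matches the reported set sizes; the Hugging Face model identifier distilbert-base-uncased is the one named in the paper, while the full 1cycle training loop is only indicated through its hyperparameters.

```python
from sklearn.model_selection import train_test_split
from transformers import DistilBertForSequenceClassification, DistilBertTokenizerFast

def group_datasets(annotations, groups):
    """One labeled data set per group (plus a group-0 baseline) via majority vote."""
    # Keep only comments annotated by at least one member of every group.
    per_group_comments = [set(annotations[annotations.worker_id.isin(g)].rev_id)
                          for g in groups]
    shared = set.intersection(*per_group_comments)

    def majority_vote(subset):
        # rev_id -> majority label; ties resolved towards 1 (assumption).
        return subset.groupby("rev_id")["attack"].agg(lambda s: int(s.mean() >= 0.5))

    datasets = [majority_vote(annotations[annotations.rev_id.isin(shared)])]  # group 0
    for g in groups:
        member_rows = annotations[annotations.worker_id.isin(g)
                                  & annotations.rev_id.isin(shared)]
        datasets.append(majority_vote(member_rows))
    return datasets

def split_ids(rev_ids, seed=0):
    """Identical train/test split for every group, so the sets stay comparable."""
    return train_test_split(sorted(rev_ids), test_size=0.2, random_state=seed)

# Model setup: distilbert-base-uncased fine-tuned as a binary ATTACK/OTHER classifier.
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)
# Fine-tuning per the paper: 1cycle policy, learning rate 5e-6, 2 epochs,
# batch size 64, at most 150 input tokens, 10% of the training set for validation.
```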
{
"text": "After training the models, we compare their performances (F1 macro). For this purpose, each model is evaluated on its own test set and the one from the other groups including group 0, which represents all annotators. Instead of reporting the F1 score, we report them relatively to our baseline (group 0) because it allows a better comparison of the results. Additionally, the actual F1 score are not relevant for this analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting Data According to Groups",
"sec_num": null
},
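To make the cross-evaluation concrete, here is a small sketch that scores every group's classifier on every group's test set and reports macro F1 relative to the baseline cell (group 0 model on the group 0 test set); predict is a placeholder for whatever inference function the fine-tuned model is wrapped in.

```python
from sklearn.metrics import f1_score

def relative_f1_matrix(models, test_sets, predict):
    """Entry (i, j): macro F1 of group i's model on group j's test set,
    reported as the difference to the baseline cell (0, 0), as in the paper."""
    scores = [[100 * f1_score(y_true, predict(model, texts), average="macro")
               for texts, y_true in test_sets]
              for model in models]
    baseline = scores[0][0]
    return [[round(score - baseline, 2) for score in row] for row in scores]
```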
{
"text": "The experiments show that our proposed method enables the grouping of annotators according to similar annotation behavior. Classifiers separately trained on data from the different groups and evaluated with the other groups' test data exhibit noticeable differences in classification performance, which confirms our approach. The detailed results can be found in the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We created one graph for each weight function. Table 1 provides the key metrics of the generated graphs. It is conspicuous that the graphs with Cohen's kappa and Krippendorff's alpha weight function have only 691,229 edges, while the other twos have 1,560,078. This difference also causes the divergence of the average degree and density. The reason for the difference is that many relations between two annotators comprise only one comment. If both agree on an annotation, Cohen's kappa and Krippendorff's alpha are not defined; consequently, we do not have an edge. Therefore, graphs with these weight functions have fewer edges. Table 2 shows the results of community detection. While the Louvain algorithm split the graphs with the Agreement Rate and Heuristic Function as weight functions in 4 groups, 10 groups in the graph with Cohen's kappa and with Krippendorff's alpha were detected. An explanation for the divergence is the difference between the number of edges of the graphs. Since the groups have various numbers of members, we select only these with at least 250 annotators due to two reasons. By doing so, we ensure that we have enough annotated comments to train the classifiers. It may be noted at this juncture that only comments were selected for the training/test set if they were annotated by the group. Therefore, groups with a small number of annotators would have reduced the size of the training/test set. The distribution of the size of the training/test set is similar to the one of the numbers of identified groups. For Agreement Rate we have 69,792 annotated comments for the training set and 17,448 for the test set, for the Heuristic Function 69,696 and 17,174, for the Cohen's kappa 19,736 and 4,934, and for Krippendorff's alpha 17,941 and 4,485. The smaller data sizes for the last two are related to the smaller average size of groups. To compare the different groups, we computed the inter-rater agreement for each group and between the groups by using Krippendorff's alpha. To calculate the rate between the groups, we compute Krippendorff's alpha using the union of all annotations from both groups. The inter-rater agreement scores (in percent, 100% means perfect agreement) for all four weight functions are depicted in Figure 1. The first column of each subfigure shows the inter-rater agreement within each group. The 4/5 columns right to the line provide the inter-rater agreement between the groups, and the last column shows the average inter-rater agreement between the groups. Please note that the inter-rater agreement scores are hard to compare between the different weight functions/subfigures because the groups, the comments, and the annotations are different. To a certain degree, the results of the Agreement Rate and the Heuristic Function are comparable and the one of Cohen's kappa and Krippendorff's alpha because these pairs have the same number of groups and a similar number of comments. If we look at the inter-rater agreement within the groups (first column of each subfigure), we see that the groups exhibit varying scores and that the deviations to the baseline (group 0, data set average) also differ. If the score is higher than the baseline, the group is more coherent in regards to the annotations. If it is lower, the group is less coherent. Furthermore, the more scores are higher than the baseline, the better because it means that the algorithm is able to create more coherent groups.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 632,
"end": 639,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 2261,
"end": 2267,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotator Graph",
"sec_num": null
},
{
"text": "Considering these aspects, we can say that Krippendorff's alpha and Heuristic Function produce better results than the other two. In the case of the Heuristic Function, the distance between the lowest and highest inter-rater reliability score (49.2% vs. 39.8%) is larger than the one of the Agreement Rate. In the case of Krippendorff's alpha, the distance between the lowest and highest score is the same as for Cohen's kappa. However, groups 3 and 4 of Krippendorff's alpha (49.8% and 50.0%) have higher scores than the two groups of Cohen's kappa with the highest inter-rater reliability (49.5% and Figure 1 : Inter-rater agreement within and between groups for different weight functions 48.7%). In both cases, the distance function is able to split the annotators into more coherent groups and a remainder than the other distance function. Since both distance functions and their results are hard to compare due to a different number of comments and groups, we choose both (Krippendorff's alpha and Heuristic Function) for the last part of the experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 602,
"end": 610,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotator Graph",
"sec_num": null
},
{
"text": "Instead of reporting the macro-F1 scores for the classifiers trained on the different group-specific training sets and tested on the all group-specific test sets, we report them relatively to the baseline (trained on group 0 and tested on group 0) for easier comparison, as depicted on Figure 2 . The baseline for Krippendorff's alpha has a macro-F1 score of 85.27%, the one of Heuristic Function 88.57%. In addition to the relative scores, the figures contain an extra column and row with average values for better comparability. It is conspicuous that the deviations reported in the first column of each matrix are lower than the rest. The reason is the following: These columns report the performances of the classifiers for the different groups on the baseline test set. Since the baseline test set has the largest number of annotations, the labels are more coherent. Consequently, classifiers perform better on the baseline test set than on their own, less coherent test sets. Figure 2a shows the results for Krippendorff's alpha distance function. The first observation is that the classifiers of groups 1 (+0.54) and 4 (+0.32) perform better on the baseline test set (group 0) than the baseline classifier. We can ascribe this to the fact that group 1 (48.4%) and group 4 (49.8%) have higher inter-rater reliability scores than the baseline (44.1%), meaning the annotations of groups 1 and 4 are more coherent. However, group 3 shows that higher inter-rater reliability does not directly imply a better performance on the baseline. It has a score of 50.0%, but it performs worse on the baseline test set (-0.23) and all classifiers perform poorly on the test set of group 3. A possible explanation can be that the annotations within the group are coherent but less coherent with respect to all other annotations. Group 2 exhibits the lowest performance on the baseline test set (-0.89) and all classifiers perform poorly on its test set. The reason is the noticeably low inter-rater reliability of 31.4% -the 27 -7.10 -11.99 -10.72 -6.51 0.54 -6.44 -10.18 -10.70 -5.68 -0.89 -6.64 -9.17 -10.07 -6.24 -0.23 -7.33 -10.86 -11.03 -6.76 0.32 -6.64 -12.03 -10.93 -6 lowest of all Krippendorff's alpha's groups. The low score indicates that the community detection algorithm grouped the annotators together whose annotation behavior is less compatible with the other's one.",
"cite_spans": [
{
"start": 2016,
"end": 2166,
"text": "27 -7.10 -11.99 -10.72 -6.51 0.54 -6.44 -10.18 -10.70 -5.68 -0.89 -6.64 -9.17 -10.07 -6.24 -0.23 -7.33 -10.86 -11.03 -6.76 0.32 -6.64 -12.03 -10.93 -6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 286,
"end": 294,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 982,
"end": 991,
"text": "Figure 2a",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Classification Models and Their Performances",
"sec_num": null
},
{
"text": "In the case of the Heuristic Function (cf. Figure 2b) , we can also find a group that performs better on the baseline test set than the baseline classifier and that has a high inter-rater reliability score (49.2%) -group 1 (+0.07). The explanation is the same as the one for groups 1 and 4 of Krippendorff's alpha. The classifier with the largest discrepancy is the one of group 3 (-0.30). This should not be surprising because group 3 has the lowest inter-rater reliability within the group (39.8%) and between the group and the baseline (46.2%). That is also the reason why all groups perform poorly on the test set of group 3. The group is comparable to group 2 of Krippendorff's alpha. Annotators that have an annotation behavior different from the rest are grouped together.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 53,
"text": "Figure 2b)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Classification Models and Their Performances",
"sec_num": null
},
{
"text": "The results show that the proposed method is suitable for identifying annotator groups purely based on annotation behavior. The deviations in interrater agreement rates of the groups and in the classifiers' performances prove this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In regards to the weight functions, we found that both Krippendorff's alpha and Heuristic Function are more suitable than the other functions. Both are able to separate the annotators into different groups based on their annotation behavior. However, it is difficult to choose a winner between both because of missing comparability. An advantage of our Heuristic Function in regards to Krippendorff's alpha as weight functions is that it does not have the undefined issue if two annotators assign only one type of label to the comments to be labeled. A potential improvement could be to combine Krippendorff's alpha weight function with the Heuristic Function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The results of our method can be linked to annotator bias in the following manner: An identified annotator group that has a high inter-rater agreement within the group, but poor classification performance on the other test sets indicates that it has a certain degree of bias as the group's annotation behavior differs from the rest. For such insights, we see currently two possible use cases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "\u2022 The insights can be used to mitigate annotator bias. The annotations of these groups can either be weighted differently or deleted to avoid transferring the bias to the classification model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "\u2022 The insights can be used to build classification models that model the annotator bias. This can be helpful for tasks that do not have one truth but rather multiple perspectives. In the case of online abuse, it is possible that one group is more tolerant towards abusive language and another one less tolerant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The novelty of our approach is that it is unsu-pervised and does not require any stipulation of bias that you want to detect in advance. Existing approaches, such as Binns et al. (2017) , who investigated gender bias, or Sap et al. (2019) and Davidson et al. (2019) , who examined racial bias, defined in their hypothesis which kind of bias they want to uncover. Our method, however, does not require any pre-defined categories to detect bias.",
"cite_spans": [
{
"start": 166,
"end": 185,
"text": "Binns et al. (2017)",
"ref_id": "BIBREF1"
},
{
"start": 221,
"end": 238,
"text": "Sap et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 243,
"end": 265,
"text": "Davidson et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In this paper, we proposed a novel graph-based method for identifying annotator bias through grouping similar annotation behavior. It differs from existing approaches by its unsupervised nature. But the method requires further research and refinement. To address our limitations, we propose the following future work: Firstly, we used only one data set for our study. The approach, however, should be also tested and refined with other data sets. The Wikipedia Detox project, for example, provides two more data sets with the same structure, but with different tasks (toxicity and aggression). In general, data availability is a challenge of this kind of research because hate speech data sets mostly contain aggregated annotations. Therefore, we urge researchers releasing data sets to provide the unaggregated annotations as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Secondly, other approaches for grouping the annotators should be investigated. We used only one community detection method, the Louvain algorithm. But there are many more methods, such as the Girvan-Newman algorithm (Girvan and Newman, 2002) and the Clauset-Newman-Moore algorithm (Clauset et al., 2004) .",
"cite_spans": [
{
"start": 216,
"end": 241,
"text": "(Girvan and Newman, 2002)",
"ref_id": "BIBREF10"
},
{
"start": 281,
"end": 303,
"text": "(Clauset et al., 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Thirdly, our methods should be extended so that it can handle smaller groups. Our current approach requires at least 250 annotators in a group to ensure that we have enough training data. But it would be interesting to investigate smaller groups in the hope that these groups are more coherent in regards to their annotation behavior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Code available on GitHub: https://github.com/ mawic/graph-based-method-annotator-bias",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been partially funded by a scholarship from the Hanns Seidel Foundation financed by the German Federal Ministry of Education and Research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Identifying and measuring annotator bias based on annotators' demographic characteristics",
"authors": [
{
"first": "Hala",
"middle": [],
"last": "Al Kuwatly",
"suffix": ""
},
{
"first": "Maximilian",
"middle": [],
"last": "Wich",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Groh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. 4th Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hala Al Kuwatly, Maximilian Wich, and Georg Groh. 2020. Identifying and measuring annotator bias based on annotators' demographic characteristics. In Proc. 4th Workshop on Online Abuse and Harms.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Like trainer, like bot? inheritance of bias in algorithmic content moderation",
"authors": [
{
"first": "Reuben",
"middle": [],
"last": "Binns",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Veale",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Van Kleek",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Shadbolt",
"suffix": ""
}
],
"year": 2017,
"venue": "International conference on social informatics",
"volume": "",
"issue": "",
"pages": "405--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reuben Binns, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2017. Like trainer, like bot? in- heritance of bias in algorithmic content moderation. In International conference on social informatics, pages 405-415. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fast unfolding of communities in large networks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Jean-Loup",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Renaud",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Lambiotte",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lefebvre",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of statistical mechanics: theory and experiment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast un- folding of communities in large networks. Jour- nal of statistical mechanics: theory and experiment, 2008(10):P10008.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Nuanced metrics for measuring unintended bias with real data for text classification",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Borkan",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. 28th WWW Conf",
"volume": "",
"issue": "",
"pages": "491--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019. Nuanced metrics for measuring unintended bias with real data for text classification. In Proc. 28th WWW Conf., pages 491- 500.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Finding community structure in very large networks",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Clauset",
"suffix": ""
},
{
"first": "E",
"middle": [
"J"
],
"last": "Mark",
"suffix": ""
},
{
"first": "Cristopher",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2004,
"venue": "Physical review E",
"volume": "70",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Clauset, Mark EJ Newman, and Cristopher Moore. 2004. Finding community structure in very large networks. Physical review E, 70(6):066111.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A coefficient of agreement for nominal scales. Educational and psychological measurement",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "20",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological mea- surement, 20(1):37-46.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Racial bias in hate speech and abusive language detection datasets",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Debasmita",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Ing",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.12516"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. arXiv preprint arXiv:1905.12516.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Measuring and mitigating unintended bias in text classification",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vasserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Comput. Surv",
"volume": "51",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3232676"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on au- tomatic detection of hate speech in text. ACM Com- put. Surv., 51(4).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets",
"authors": [
{
"first": "Mor",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1161--1166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an in- vestigation of annotator bias in natural language un- derstanding datasets. In 2019 Conference on Empiri- cal Methods in Natural Language Processing, pages 1161-1166.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Community structure in social and biological networks",
"authors": [
{
"first": "Michelle",
"middle": [],
"last": "Girvan",
"suffix": ""
},
{
"first": "E",
"middle": [
"J"
],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Newman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the national academy of sciences",
"volume": "99",
"issue": "",
"pages": "7821--7826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michelle Girvan and Mark EJ Newman. 2002. Com- munity structure in social and biological networks. Proceedings of the national academy of sciences, 99(12):7821-7826.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Content Analysis: An Introduction to Its Methodology",
"authors": [
{
"first": "K",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Krippendorff. 2004. Content Analysis: An Introduc- tion to Its Methodology. Content Analysis: An In- troduction to Its Methodology. Sage.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Computing krippendorff's alpha-reliability",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Measuring the reliability of hate speech annotations: The case of the european refugee crisis",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rist",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Cabrera",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Kurowsky",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wojatzki",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.08118"
]
},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Ross, Michael Rist, Guillermo Carbonell, Ben- jamin Cabrera, Nils Kurowsky, and Michael Wo- jatzki. 2017. Measuring the reliability of hate speech annotations: The case of the european refugee crisis. arXiv preprint arXiv:1701.08118.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The risk of racial bias in hate speech detection",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. 57th ACL Conf",
"volume": "",
"issue": "",
"pages": "1668--1678",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proc. 57th ACL Conf., pages 1668-1678.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W17-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10, Valencia, Spain. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The kappa statistic in reliability studies: use, interpretation, and sample size requirements",
"authors": [
{
"first": "Julius",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "Chris C",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 2005,
"venue": "Physical therapy",
"volume": "85",
"issue": "3",
"pages": "257--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julius Sim and Chris C Wright. 2005. The kappa statis- tic in reliability studies: use, interpretation, and sam- ple size requirements. Physical therapy, 85(3):257- 268.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A disciplined approach to neural network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay",
"authors": [
{
"first": "N",
"middle": [],
"last": "Leslie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.09820"
]
},
"num": null,
"urls": [],
"raw_text": "Leslie N Smith. 2018. A disciplined approach to neu- ral network hyper-parameters: Part 1-learning rate, batch size, momentum, and weight decay. arXiv preprint arXiv:1803.09820.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Detecting east asian prejudice on social media",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Botelho",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Broniatowski",
"suffix": ""
},
{
"first": "Ella",
"middle": [],
"last": "Guest",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Austin Botelho, David Broniatowski, Ella Guest, Matthew Hall, Helen Margetts, Rebekah Tromble, Zeerak Waseem, and Scott Hale. 2020. De- tecting east asian prejudice on social media.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Directions in abusive language training data: Garbage in, garbage out",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.01670"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Direc- tions in abusive language training data: Garbage in, garbage out. arXiv preprint arXiv:2004.01670.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Challenges and frontiers in abusive content detection",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "80--93",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3509"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 80-93, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Are You a Racist or Am I Seeing Things? Annotator Influence on Hate Speech Detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. First Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "138--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem. 2016. Are You a Racist or Am I See- ing Things? Annotator Influence on Hate Speech Detection on Twitter. In Proc. First Workshop on NLP and Computational Social Science, pages 138- 142.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Impact of politically biased data on hate speech classification",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Wich",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Groh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. 4th Workshop on Online Abuse and Harms",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Wich, Jan Bauer, and Georg Groh. 2020. Impact of politically biased data on hate speech clas- sification. In Proc. 4th Workshop on Online Abuse and Harms.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Detection of Abusive Language: the Problem of Biased Datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "602--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. Proc. 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 602-608.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Research:detox/data release",
"authors": [
{
"first": "",
"middle": [
"N"
],
"last": "Wikimedia",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wikimedia. n.d. Research:detox/data release. https: //meta.wikimedia.org/wiki/Research: Detox/Data_Release.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Hate in the machine: Anti-black and anti-muslim social media posts as predictors of offline racially and religiously aggravated crime",
"authors": [
{
"first": "L",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Pete",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Javed",
"suffix": ""
},
{
"first": "Sefa",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ozalp",
"suffix": ""
}
],
"year": 2020,
"venue": "The British Journal of Criminology",
"volume": "60",
"issue": "1",
"pages": "93--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew L Williams, Pete Burnap, Amir Javed, Han Liu, and Sefa Ozalp. 2020. Hate in the machine: Anti-black and anti-muslim social media posts as predictors of offline racially and religiously aggra- vated crime. The British Journal of Criminology, 60(1):93-117.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Ex machina: Personal attacks seen at scale",
"authors": [
{
"first": "Ellery",
"middle": [],
"last": "Wulczyn",
"suffix": ""
},
{
"first": "Nithum",
"middle": [],
"last": "Thain",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Dixon",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1391--1399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, pages 1391-1399.",
"links": null
}
},
"ref_entries": {
"FIGREF3": {
"num": null,
"text": "Macro F1 scores relative to the baseline (0,0)",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"content": "<table><tr><td>Training Classification Models for Groups and</td></tr><tr><td>Comparing Their Performances</td></tr></table>",
"num": null,
"text": "Graph metrics",
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>Community Detection</td></tr></table>",
"num": null,
"text": "Results of community detection",
"html": null,
"type_str": "table"
}
}
}
}