{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:49.479295Z"
},
"title": "In Data We Trust: A Critical Analysis of Hate Speech Detection Datasets",
"authors": [
{
"first": "Kosisochukwu",
"middle": [
"Judith"
],
"last": "Madukwe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Victoria University of Wellington",
"location": {
"postBox": "PO Box 600",
"postCode": "6012",
"settlement": "Wellington",
"country": "New Zealand"
}
},
"email": "kosisochukwu.madukwe@ecs.vuw.ac.nz"
},
{
"first": "Xiaoying",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Victoria University of Wellington",
"location": {
"postBox": "PO Box 600",
"postCode": "6012",
"settlement": "Wellington",
"country": "New Zealand"
}
},
"email": "xiaoying.gao@ecs.vuw.ac.nz"
},
{
"first": "Bing",
"middle": [],
"last": "Xue",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Victoria University of Wellington",
"location": {
"postBox": "PO Box 600",
"postCode": "6012",
"settlement": "Wellington",
"country": "New Zealand"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recently, a few studies have discussed the limitations of datasets collected for the task of detecting hate speech from different viewpoints. We intend to contribute to the conversation by providing a consolidated overview of these issues pertaining to the data that debilitate research in this area. Specifically, we discuss how the varying pre-processing steps and the format for making data publicly available result in highly varying datasets that make an objective comparison between studies difficult and unfair. There is currently no study (to the best of our knowledge) focused on comparing the attributes of existing datasets for hate speech detection, outlining their limitations and recommending approaches for future research. This work intends to fill that gap and become the one-stop shop for information regarding hate speech datasets.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recently, a few studies have discussed the limitations of datasets collected for the task of detecting hate speech from different viewpoints. We intend to contribute to the conversation by providing a consolidated overview of these issues pertaining to the data that debilitate research in this area. Specifically, we discuss how the varying pre-processing steps and the format for making data publicly available result in highly varying datasets that make an objective comparison between studies difficult and unfair. There is currently no study (to the best of our knowledge) focused on comparing the attributes of existing datasets for hate speech detection, outlining their limitations and recommending approaches for future research. This work intends to fill that gap and become the one-stop shop for information regarding hate speech datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It is imperative to detect hateful speech on social media platforms and other online spaces because its real-life implications are usually dire. The research community working towards achieving this goal spans from the Social Sciences to Computer Science. Under the field of Computer Science, Natural Language Processing (NLP) and Machine Learning (ML) techniques have been applied to this task of detecting hate speech by mostly framing it as a text classification task. Here, text is classified into different categories based on its innate content or features. Text classification is a supervised ML task, which means it requires a considerable amount of labelled data. Each data instance needs a label or a class/category that it belongs to. Although the majority of the studies in this research area use labelled data as they conduct a classification task, there are some that do not (Gao et al., 2017; Xiang et al., 2012) .",
"cite_spans": [
{
"start": 889,
"end": 907,
"text": "(Gao et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 908,
"end": 927,
"text": "Xiang et al., 2012)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we concentrate on datasets for hate speech detection in the English language, while briefly highlighting other languages and similar concepts such as cyberbullying and abuse detection. The same issues discussed here also hold in other languages; thus, the suggested solutions apply there as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The overall aim of this work is to provide insight into the existing datasets, a consolidated analysis of their strengths and weaknesses and, most importantly, suggested methods to advance research in this area. To achieve this, we ask several questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 What makes a dataset a benchmark?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 How do we handle a class-imbalanced dataset?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Should we use it in its unbalanced form or not?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 What typology should we follow for hate speech research? What should or shouldn't it include?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 What is the best ethical format for collating and sharing such a sensitive dataset so as to avoid data degradation?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although this work will be critiquing a few studies, it is not meant to be negative in any form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The importance of hate speech detection research cannot be overemphasised. Now, more than ever, with the current inflammatory political climate and discourse all around the world and minorities in various locations demanding equality and equity, we cannot allow additional bias to be introduced into their lives through artificial intelligence. The problem of hate speech detection is yet to be solved even to an acceptable level. It would be counter-productive if research efforts were not focused and channeled towards a better tomorrow by building on top of one another. So we were motivated to go back to a root of the problem: the data. One of the foundations of this research work (that we can easily make changes to) is the dataset. We can only build solid structures on solid foundations. Furthermore, research efforts would be futile if the proposed state-of-the-art for this task fails to perform well on a realistic dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivations",
"sec_num": "2"
},
{
"text": "In this section, we highlight the currently existing datasets used in the literature for the task of detecting hate speech. In the broad area of abusive language detection, there exist several other datasets collected and annotated for cyberbullying, toxicity, aggression and so on (we do not discuss those in depth as they are outside the scope of this work). As highlighted in (Fortuna and Nunes, 2018) , the majority of the studies in this area of hate speech detection collected and annotated their own datasets; however, some were not made publicly available. The existing datasets are:",
"cite_spans": [
{
"start": 378,
"end": 403,
"text": "(Fortuna and Nunes, 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Existing Datasets",
"sec_num": "3"
},
{
"text": "1. BURNAP Dataset: This dataset collected by (Burnap and Williams, 2016) comprises cyber-hate targeted at four different protected characteristics (sexual orientation, race, disability and religion) in roughly equal amounts. Of the annotated sample, 10.15% of the sexual orientation category, 3.73% of the race category, 2.66% of the disability category and 11.68% of the religion category are considered offensive or antagonistic. The dataset was collected after different trigger events for each category.",
"cite_spans": [
{
"start": 45,
"end": 72,
"text": "(Burnap and Williams, 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Existing Datasets",
"sec_num": "3"
},
{
"text": "2. WASEEM Dataset 1 : This dataset was published by (Waseem and Hovy, 2016) . It contains 16k English tweets annotated into three classes (1972 are Racism, 3383 are Sexism and 11559 are Neither) and was made publicly available using TweetIDs. The authors annotated the data themselves, then used a third party to validate the annotations. They record an inter-annotator agreement of 0.84. This dataset is unbalanced and also biased toward specific users, since all of the tweets labelled as racist were from only 9 users, while the other classes were from more than 600 users. This dataset was extended in (Waseem, 2016) by 4033 additional tweets, where they experimented with amateur and expert annotations to investigate their influence based on existing knowledge of the research area.",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF50"
},
{
"start": 606,
"end": 620,
"text": "(Waseem, 2016)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Existing Datasets",
"sec_num": "3"
},
{
"text": "3. DAVIDSON Dataset 2 : This was published by (Davidson et al., 2017) . The dataset contains 24,802 tweets in English (5.77% labelled as Hate speech, 77.43% as Offensive and 16.80% as Neither) and was published in raw text format. They report collecting this data from Twitter using a lexicon from HateBase 3 containing hateful words and phrases. They used a crowdsourcing platform (Figure-Eight 4 , formerly CrowdFlower) for annotating the tweets into the 3 classes. The annotators were provided with the authors' definitions and specific instructions. They record an inter-rater agreement of 92% as provided by the crowdsourcing platform. A benchmark dataset is used to compare different learning methods (Caruana and Niculescu-Mizil, 2006) and to objectively measure progress on a particular problem. The dataset is usually the only necessary consistent/constant aspect of a study. Benchmark datasets have been shown in areas like image processing to be of paramount importance in enabling research progress and a fair/objective comparison between studies and proposed methods. Datasets like CIFAR10, CIFAR100 (Krizhevsky, 2009) and MNIST (LeCun and Cortes, 2010) for image processing and computer vision were published and are maintained by large research institutions. The CIFAR10 and CIFAR100 datasets have designated train and test sets, which makes comparison between studies and proposed methods fair.",
"cite_spans": [
{
"start": 615,
"end": 650,
"text": "(Caruana and Niculescu-Mizil, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 1021,
"end": 1039,
"text": "(Krizhevsky, 2009)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Existing Datasets",
"sec_num": "3"
},
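{
"text": "To make the raw-text release concrete, the snippet below is a minimal sketch of loading the DAVIDSON data directly from its public repository (footnote 2); the file name labeled_data.csv and the integer class encoding are assumptions about the repository's current layout rather than details stated in this paper.\n\nimport pandas as pd\n\n# Assumed location and layout of the released CSV (see footnote 2).\nURL = ('https://raw.githubusercontent.com/t-davidson/'\n       'hate-speech-and-offensive-language/master/data/labeled_data.csv')\n\ndf = pd.read_csv(URL)\n# Assumed encoding of the 'class' column: 0 = hate speech, 1 = offensive, 2 = neither.\nprint(df['class'].value_counts(normalize=True))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Existing Datasets",
"sec_num": "3"
},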
{
"text": "In Table 1 , we show the state of availability and accessibility of some of the discussed datasets. Making datasets available on personal repositories is problematic because the user can take them down at any time. For example, a hate speech dataset listed in (Fortuna and Nunes, 2018) on Annie Thorburn's personal GitHub page 15 no longer exists. This problem can also occur when a website address changes. For example, in (Watanabe et al., 2018) , one of the datasets used was listed as being at www.crowdflower.com/data-for-everyone/ which now redirects to https://appen.com/resources/datasets/. However, the dataset could not be found as of 19th June, 2020.",
"cite_spans": [
{
"start": 257,
"end": 282,
"text": "(Fortuna and Nunes, 2018)",
"ref_id": "BIBREF16"
},
{
"start": 426,
"end": 449,
"text": "(Watanabe et al., 2018)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Accessibility and Availability",
"sec_num": "4.1"
},
{
"text": "Data degradation occurs when a dataset that is published in an encrypted format and needs to be regenerated by the researcher on demand no longer produces the same amount of data as on the publication date. This phenomenon occurs with hate speech data harvested from Twitter and published in the form of tweetIDs, which are identification numbers linked to each individual tweet. In some cases, the author of the tweet deletes it, or the account owner deactivates the account, or it might be reported to Twitter as breaking one of their guidelines and Twitter takes it down. This has been reported in (Zhang and Luo, 2018; Arango et al., 2019) . Also, (Watanabe et al., 2018) noted that the WASEEM dataset had only 6,655 tweets left, out of the 6,909 initially published. (Osho et al., 2020) reported that for the FOUNTA dataset they only found 69k out of 80k tweets. Compared to the distribution highlighted in Table 1 , the new distribution over the classes was 62% normal, 20% abusive, 14% spam and 4% hateful. The hateful class was reduced even further. Both the FOUNTA and WASEEM data suffer from data degradation. As of June 2020, we found that the first batch of WASEEM data was completely degraded, while the second batch had only 2,412 out of 6,090 tweets left. We also found that the FOUNTA data had 18,943 tweets out of the 80,000 left. The already minute class of interest bears the brunt of this phenomenon.",
"cite_spans": [
{
"start": 596,
"end": 617,
"text": "(Zhang and Luo, 2018;",
"ref_id": "BIBREF55"
},
{
"start": 618,
"end": 638,
"text": "Arango et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 646,
"end": 669,
"text": "(Watanabe et al., 2018)",
"ref_id": "BIBREF52"
},
{
"start": 768,
"end": 787,
"text": "(Osho et al., 2020)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 907,
"end": 914,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset Accessibility and Availability",
"sec_num": "4.1"
},
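{
"text": "A minimal sketch of how the degradation reported above could be quantified is shown below; the hydrate() argument is a hypothetical helper that wraps whichever Twitter lookup client the researcher uses and returns the subset of tweet IDs that are still retrievable.\n\ndef degradation_report(labelled_ids, hydrate):\n    # labelled_ids: dict mapping tweet ID -> class label.\n    # hydrate: hypothetical callable returning the IDs still retrievable from Twitter.\n    alive = set(hydrate(list(labelled_ids)))\n    per_class = {}\n    for tweet_id, label in labelled_ids.items():\n        total, kept = per_class.get(label, (0, 0))\n        per_class[label] = (total + 1, kept + (tweet_id in alive))\n    for label, (total, kept) in sorted(per_class.items()):\n        print(f'{label}: {kept}/{total} tweets still available ({100 * kept / total:.1f}%)')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Accessibility and Availability",
"sec_num": "4.1"
},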
{
"text": "For a persistent benchmark dataset to succeed, we need to make data available in a better format. The nature of the data, and the fact that it provides a consolidated source of harmful information, makes this very tricky. Therefore, we suggest a submission portal for the data, where each researcher can request a copy of the data using a verifiable email address and then a copy of the benchmark dataset is sent to them. This would restrict access for those who might want to use this data for malicious purposes. This service could be provided by large institutional data repositories like Dataverse 16 or ICPSR 17 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Accessibility and Availability",
"sec_num": "4.1"
},
{
"text": "Unlike most text classification tasks, such as sentiment analysis, hate speech detection suffers from a severe class imbalance issue, with the hate class in most cases making up less than 12% of the multi-class datasets and less than half of the total dataset for the binary datasets (Table 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 288,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},
{
"text": "Usually, when the classes in a dataset are unbalanced, it is because of one of the following reasons: either",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},
{
"text": "\u2022 the data is rarely occurring (more specifically, the class of interest is rare compared to the other class(es));",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},
{
"text": "\u2022 or the data collection and labelling is difficult, time consuming and expensive;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},
{
"text": "\u2022 or the overlap between the classes is high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},
{
"text": "For the hate speech detection task, it is all of the above. It becomes increasingly difficult to train ML algorithms on such small samples, which leads to subpar performance. The class imbalance problem is probably inevitable when collecting data, as there is an estimated maximum of 3% derogatory tweets on Twitter (Founta et al., 2018) . Thus, the open question of whether to work with the dataset in its unbalanced form or to look into methods to make it balanced remains unanswered. It is desirable to develop a model that does a good job of identifying hateful instances even with the small sample size. Certainly, such a model would perform well in real-life scenarios during deployment. Therefore, a natural question is: are the methods for learning with a small data size more easily accessible and less computationally expensive than methods for reducing the class imbalance? It is worthwhile to look into both and compare. Several studies (Founta et al., 2019; Madukwe and Gao, 2019; Mozafari et al., 2020) have used the datasets in their unbalanced form with the claim that, since this is the naturally occurring state, it should not be altered. However, we argue that this is not advantageous to existing supervised ML algorithms that depend on a large supply of data with balanced classes for optimum performance. Similarly, (Swamy et al., 2019) showed that models generalize better when trained on data containing a high number of samples in the positive class, which is unfortunately also the minority class in most datasets.",
"cite_spans": [
{
"start": 316,
"end": 337,
"text": "(Founta et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 959,
"end": 979,
"text": "Founta et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 980,
"end": 1002,
"text": "Madukwe and Gao, 2019;",
"ref_id": "BIBREF28"
},
{
"start": 1003,
"end": 1025,
"text": "Mozafari et al., 2020;",
"ref_id": "BIBREF32"
},
{
"start": 1343,
"end": 1363,
"text": "(Swamy et al., 2019)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},
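{
"text": "As one illustration of working with the imbalance rather than resampling it away, the sketch below (assuming scikit-learn and placeholder data; this is not a method proposed here) reweights classes inversely to their frequency so that the rare hate class carries more weight during training.\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\ntexts = ['you are all lovely', 'placeholder hateful tweet', 'another neutral tweet']\nlabels = ['neither', 'hate', 'neither']   # placeholder, imbalanced labels\n\n# class_weight='balanced' scales each class inversely to its frequency,\n# so errors on the rare hate class cost more during optimisation.\nmodel = make_pipeline(TfidfVectorizer(),\n                      LogisticRegression(class_weight='balanced', max_iter=1000))\nmodel.fit(texts, labels)\nprint(model.predict(['placeholder hateful tweet']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},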
{
"text": "Since the collection and annotation of data for this task is time-consuming, expensive and error-prone with low yield, we recommend more studies into the best way to augment existing data. This would assist in increasing the data size and, in turn, solving the class imbalance problem. A few studies have discussed and proposed solutions for augmenting related datasets (Chung et al., 2019; Karatsalos and Panagiotakis, 2020; Sharifirad et al., 2018) . However, employing data augmentation as a preprocessing step to cater to the class imbalance problem will lead to an unfair comparison amongst other proposed solutions, as there is a wide range of augmentation techniques. Also, data augmentation methods such as oversampling the minority class, if not done right (Agrawal and Awekar, 2018) , will introduce bias into the model (Arango et al., 2019) . Another suggestion is to look into ML methods that are unaffected by the class size, such as one-class learning and active learning. Rigorous investigations are required to answer the question of how to handle class imbalance in hate speech datasets.",
"cite_spans": [
{
"start": 370,
"end": 390,
"text": "(Chung et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 391,
"end": 425,
"text": "Karatsalos and Panagiotakis, 2020;",
"ref_id": "BIBREF22"
},
{
"start": 426,
"end": 450,
"text": "Sharifirad et al., 2018)",
"ref_id": "BIBREF41"
},
{
"start": 754,
"end": 780,
"text": "(Agrawal and Awekar, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 818,
"end": 839,
"text": "(Arango et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},
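{
"text": "The sketch below illustrates the general idea of augmenting the minority class with a naive random word-dropout transformation; it is only a toy stand-in, not one of the augmentation methods cited above, and any such step would need to be reported to keep comparisons fair.\n\nimport random\n\ndef word_dropout(text, drop_prob=0.1, rng=random):\n    # Return a copy of the text with a small fraction of tokens removed.\n    tokens = text.split()\n    kept = [t for t in tokens if rng.random() > drop_prob]\n    return ' '.join(kept) if kept else text\n\ndef oversample_minority(texts, labels, minority_label, copies=1):\n    # Append augmented copies of minority-class instances to the dataset.\n    new_texts, new_labels = list(texts), list(labels)\n    for text, label in zip(texts, labels):\n        if label == minority_label:\n            for _ in range(copies):\n                new_texts.append(word_dropout(text))\n                new_labels.append(label)\n    return new_texts, new_labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Class Imbalance Issue",
"sec_num": "4.2"
},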
{
"text": "It is known that there are varying definitions of hate speech; however, there are some consistencies amongst them. (Fortuna and Nunes, 2018) have analysed some available definitions of hate speech and highlighted the major similarities amongst them. Specifically, hate speech:",
"cite_spans": [
{
"start": 114,
"end": 139,
"text": "(Fortuna and Nunes, 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},
{
"text": "\u2022 has a specific target.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},
{
"text": "\u2022 incites violence or hate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},
{
"text": "\u2022 attacks or diminishes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},
{
"text": "\u2022 can contain humor or sarcasm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},
{
"text": "Varying definitions imply that, of course, it might be impossible to completely rid social media platforms of hateful instances. Despite this fact, the agreed-upon similarities are a good place to start. Currently, existing datasets are affected by these variations because the annotations are driven by the definitions. Thus, similar instances can fall under different annotation categories. (Ross et al., 2017) investigated the effects of the presence and absence of a definition during annotation on the annotation reliability of a hate speech dataset. They conclude that hate speech requires a stronger definition. Similarly, (Fortuna et al., 2020) empirically find that most of the publicly available datasets are incompatible due to different definitions being assigned to similar concepts.",
"cite_spans": [
{
"start": 393,
"end": 412,
"text": "(Ross et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 630,
"end": 652,
"text": "(Fortuna et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},
{
"text": "In order to measure the annotation reliability of the labels in a dataset, a numerical index known as the Inter-Rater/Inter-Coder/Inter-Annotator Agreement (Artstein and Poesio, 2008 ) is usually adopted. The studies that collected data use it to measure the level of agreement among their annotators on the labels they chose for each text or sentence. Examples of this score are Fleiss' (Fleiss, 1971) or Cohen's (Cohen, 1960) Kappa. This score is affected by annotator bias and imbalance in the classes, making it unreliable. In addition, different studies suggest different thresholds for acceptable annotation (Di Eugenio and Glass, 2004; Artstein and Poesio, 2008) . As can be seen from the datasets highlighted in Section 3, the annotation reliability is relatively low. In (Awal et al., 2020) , the authors propose a framework to analyse the annotation inconsistency in the WASEEM, DAVIDSON and FOUNTA datasets. They found major inconsistencies in the labels of all three datasets, most especially in the FOUNTA dataset, where duplicate tweets exist in great numbers and the exact same tweet can have opposing labels. ML models built on this data will find it difficult to learn anything useful. Additionally, using different names for the same concept can be misleading. (Waseem et al., 2017) examined the relationship between abusive language, hate speech, cyberbullying and trolling. A lax use of typology affects annotation. For example, in (Wiegand et al., 2019) , the racism and sexism classes in the WASEEM data were conflated into one class and the labels changed to Abuse and No Abuse.",
"cite_spans": [
{
"start": 156,
"end": 182,
"text": "(Artstein and Poesio, 2008",
"ref_id": "BIBREF2"
},
{
"start": 388,
"end": 402,
"text": "(Fleiss, 1971)",
"ref_id": "BIBREF15"
},
{
"start": 413,
"end": 426,
"text": "(Cohen, 1960)",
"ref_id": "BIBREF9"
},
{
"start": 628,
"end": 640,
"text": "Glass, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 641,
"end": 667,
"text": "Artstein and Poesio, 2008)",
"ref_id": "BIBREF2"
},
{
"start": 778,
"end": 797,
"text": "(Awal et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 1273,
"end": 1294,
"text": "(Waseem et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 1445,
"end": 1467,
"text": "(Wiegand et al., 2019)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},
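{
"text": "To illustrate why chance-corrected agreement behaves differently from raw agreement on imbalanced labels, the toy sketch below (scikit-learn assumed; the labels are invented) computes Cohen's Kappa for two annotators who agree on 19 of 20 items yet obtain a much lower Kappa.\n\nfrom sklearn.metrics import cohen_kappa_score\n\n# Two annotators labelling 20 tweets, only a couple of which are marked as hate.\nannotator_a = ['neither'] * 18 + ['hate', 'hate']\nannotator_b = ['neither'] * 18 + ['hate', 'neither']\n\nraw = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)\nkappa = cohen_kappa_score(annotator_a, annotator_b)\nprint(f'raw agreement = {raw:.2f}, Cohen kappa = {kappa:.2f}')   # 0.95 vs roughly 0.64",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Definitions and How it Affects Annotation",
"sec_num": "4.3"
},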
{
"text": "Hate speech datasets sometimes have very similar labels, and some studies merge some of them together into one class, often as a way to combat the level of class imbalance. However, this conflation could negatively affect research progress, as the distinction between them is necessary. One example is the DAVIDSON data with the Offensive and Hate classes or the WASEEM data with the Racist and Sexist classes. Classes in the DAVIDSON data were conflated in (Zhang and Luo, 2018; where they merged the Hate and Offensive classes into one class, while (Miok et al., 2019) conflated the Offensive and Neither classes into a Non-hate class. (Watanabe et al., 2018; Wiegand et al., 2019) conflated classes in the WASEEM data, and for the FOUNTA data, (Davidson and Bhattacharya, 2020) deleted the Spam class and conflated the Hate and Abusive classes into Abusive.",
"cite_spans": [
{
"start": 453,
"end": 474,
"text": "(Zhang and Luo, 2018;",
"ref_id": "BIBREF55"
},
{
"start": 543,
"end": 562,
"text": "(Miok et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 628,
"end": 651,
"text": "(Watanabe et al., 2018;",
"ref_id": "BIBREF52"
},
{
"start": 652,
"end": 673,
"text": "Wiegand et al., 2019)",
"ref_id": "BIBREF53"
},
{
"start": 736,
"end": 769,
"text": "(Davidson and Bhattacharya, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conflating Classes/Labels",
"sec_num": "4.4"
},
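{
"text": "Operationally, conflation is just a relabelling step, as in the minimal sketch below; the integer encoding of the DAVIDSON classes (0 = hate, 1 = offensive, 2 = neither) is an assumption about the released CSV, and the point is only that any such mapping should be reported explicitly.\n\nimport pandas as pd\n\ndf = pd.DataFrame({'class': [0, 1, 2, 1, 2, 1]})   # placeholder DAVIDSON-style labels\nconflate_hate_offensive = {0: 'hate', 1: 'hate', 2: 'non-hate'}\ndf['binary_label'] = df['class'].map(conflate_hate_offensive)\nprint(df['binary_label'].value_counts())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflating Classes/Labels",
"sec_num": "4.4"
},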
{
"text": "The issues discussed in the last two sections (4.3 and 4.4) affect the typology used in this research area. There aren't any enforced or strict demarcations; therefore, the use of varying terms to mean one thing negatively affects research progress. An author searching for hate speech data or studies might miss out on ones that used abusive language or toxic comment as an umbrella term encompassing several paradigms. We suggest that the terms be used strictly following the available definitions. Similar to the suggestion in , offensive language is not the same as hate speech and should not be merged. Also, abusive language and cyberbullying should not be merged with hate speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflating Classes/Labels",
"sec_num": "4.4"
},
{
"text": "Social media data is often very noisy since it is user-generated. Different researchers have employed varying steps to clean the data in preparation for an ML algorithm. We show that the choice of steps can affect the data size, further obstructing an objective comparison between studies. Table 2 shows a few papers using three commonly used hate speech datasets and the preprocessing applied, which leads to variations that negatively affect a fair comparison. Some of the existing studies select different train-test splits such as 70:30 or 80:20, some do a train-test-validation split of 70:15:15, 60:20:20 or 80:10:10, while some do 10-fold or 5-fold cross-validation. This varying setting means that fair comparison amongst studies is not possible unless every researcher reruns all existing studies they wish to compare with. This is both impractical and costly. Here, we highlight factors that qualify a dataset to be considered a benchmark.",
"cite_spans": [],
"ref_spans": [
{
"start": 329,
"end": 336,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
{
"text": "\u2022 A publicly available dataset: The dataset should be easy for potential researchers to access. This will increase the chances that researchers will use the dataset to measure the performance of their proposed methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
{
"text": "\u2022 Consistent Train-Test-Validation Split: Likewise, this will contribute to fairer comparison between studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
{
"text": "\u2022 Accessible data format: The data should preferably be in a format that does not degrade or change over time. Therefore, the exact same dataset is available to Researcher A now and to Researcher Z later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
{
"text": "\u2022 Absence of bias: A benchmark dataset lacks (for the most part) bias. A benchmark dataset for hate speech detection needs to be devoid of racial (Davidson et al., 2019; Sap et al., 2019) , gender (Park et al., 2018) or intersectional (Kim et al., 2020) biases. Bias introduced by the data collection process was discussed in (Wiegand et al., 2019) . Likewise, (Waseem et al., 2018) noted that more than 2k tweets in the DAVIDSON dataset, written in African American Vernacular English, were labeled as hateful or offensive simply because they used the n-word. A diverse group of annotators would have significantly reduced this bias.",
"cite_spans": [
{
"start": 143,
"end": 166,
"text": "(Davidson et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 167,
"end": 183,
"text": "Sap et al., 2019",
"ref_id": "BIBREF40"
},
{
"start": 184,
"end": 212,
"text": "), gender (Park et al., 2018",
"ref_id": null
},
{
"start": 231,
"end": 249,
"text": "(Kim et al., 2020)",
"ref_id": null
},
{
"start": 322,
"end": 344,
"text": "(Wiegand et al., 2019)",
"ref_id": "BIBREF53"
},
{
"start": 357,
"end": 378,
"text": "(Waseem et al., 2018)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
{
"text": "In (Arango et al., 2019) , they showed that a bias in user distribution adversely affected the generalization ability of the proposed models. Therefore, it is important that benchmark datasets are not biased towards particular users and that information on the distribution of the users whose tweets make up the dataset is provided in an anonymized format. (Davidson and Bhattacharya, 2020) reported that in the FOUNTA dataset there are several duplicated tweets, which can introduce a strong bias in the model as some instances are contained in both the training and testing sets.",
"cite_spans": [
{
"start": 3,
"end": 24,
"text": "(Arango et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 357,
"end": 390,
"text": "(Davidson and Bhattacharya, 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
{
"text": "\u2022 A common evaluation method/metric: Different studies use different metrics, which hinders comparison without re-implementation, and re-implementation might not be feasible if the method in question is expensive to rerun. Also, some metric choices do not reflect the true performance of the proposed methods. (Olteanu et al., 2017) argues for evaluation metrics that are directly proportional to user perception of correctness, and thus more human-centered. A minimal sketch combining a fixed train-test split with per-class and macro-averaged reporting is given after this list.",
"cite_spans": [
{
"start": 291,
"end": 313,
"text": "(Olteanu et al., 2017)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
{
"text": "\u2022 It should preferably be pre-processed to an extent. If this is not feasible, then the authors should endeavor to make their pre-processing code public so that other researchers can apply it to keep the resulting dataset consistent and uniform. Table 3 highlights the existing publicly available datasets and the benchmark criteria they fulfil. From this summary, it is clear that there currently exists no benchmark hate speech detection dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},
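{
"text": "To make the recommendation about releasing pre-processing code concrete, the sketch below shows one possible shareable preprocessing function; the particular steps (replacing URLs and mentions with placeholders, stripping the retweet marker, lower-casing) are illustrative choices, not a canonical pipeline.\n\nimport re\n\nURL_RE = re.compile(r'https?://\\S+')\nMENTION_RE = re.compile(r'@\\w+')\nRT_RE = re.compile(r'^rt\\s+', flags=re.IGNORECASE)\n\ndef preprocess(tweet):\n    # Replace URLs and user mentions with placeholders, then drop a leading RT marker.\n    tweet = URL_RE.sub('<url>', tweet)\n    tweet = MENTION_RE.sub('<user>', tweet)\n    tweet = RT_RE.sub('', tweet)\n    return tweet.lower().strip()\n\nprint(preprocess('RT @someone check this http://example.com NOW'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},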
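{
"text": "Tying the split and metric points together, the sketch below (scikit-learn and placeholder data assumed) freezes a stratified train-test split with a fixed random seed and reports per-class and macro-averaged F1 on it; a majority-class baseline illustrates why accuracy alone hides minority-class performance.\n\nimport pandas as pd\nfrom sklearn.dummy import DummyClassifier\nfrom sklearn.metrics import classification_report\nfrom sklearn.model_selection import train_test_split\n\ndf = pd.DataFrame({'text': [f'placeholder tweet {i}' for i in range(20)],\n                   'label': ['hate'] * 4 + ['neither'] * 16})\n\n# A stratified split with a pinned seed can be released with the data so that\n# every study evaluates on exactly the same partition.\ntrain, test = train_test_split(df, test_size=0.3, random_state=42, stratify=df['label'])\n\n# The majority-class baseline reaches high accuracy here while detecting no hate at all,\n# which is why per-class and macro-averaged F1 are the more informative report.\nbaseline = DummyClassifier(strategy='most_frequent').fit(train[['text']], train['label'])\nprint(classification_report(test['label'], baseline.predict(test[['text']]), digits=3, zero_division=0))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Varying Preprocessing Steps and Train-Test Splits",
"sec_num": "4.5"
},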
{
"text": "First, we want to encourage researchers to put more effort towards a less biased benchmark dataset, taking the previously discussed factors into consideration. Second, we also implore social media platforms to make data access easier for researchers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Implications for future research",
"sec_num": "5"
},
{
"text": "Collaboration with these platforms is another way to ensure better data sharing. Twitter has been known to release datasets for research purposes 18 (Vidgen et al., 2019) .",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Vidgen et al., 2019)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Implications for future research",
"sec_num": "5"
},
{
"text": "We suggest that all datasets be anonymized before release, because some of the usernames left in datasets have ended up in research publications, which is a glaring ethical breach. Although some studies have extracted user information as a feature, we argue that it raises ethical concerns and should be avoided. For a more in-depth survey of the issues surrounding social data bias, see (Olteanu et al., 2019) .",
"cite_spans": [
{
"start": 399,
"end": 421,
"text": "(Olteanu et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Implications for future research",
"sec_num": "5"
},
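{
"text": "A minimal sketch of one way to do this is given below: every distinct @-mention is replaced with a stable pseudonym, so user-level distributions can still be reported without exposing identities. This is an illustrative approach, not a complete anonymization protocol.\n\nimport re\n\ndef anonymize(tweets):\n    # Map each distinct @username to a stable pseudonym such as <user_1>.\n    pseudonyms = {}\n    def replace(match):\n        user = match.group(0)\n        if user not in pseudonyms:\n            pseudonyms[user] = f'<user_{len(pseudonyms) + 1}>'\n        return pseudonyms[user]\n    return [re.sub(r'@\\w+', replace, t) for t in tweets]\n\nprint(anonymize(['@alice you ok? cc @bob', '@alice again']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Implications for future research",
"sec_num": "5"
},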
{
"text": "Also, we propose that specific terms be used to avoid confusion and conflation of ideas. Even better, a clear definition should be provided of what the researcher means by each term, e.g. what counts as offensive, abusive, or hate speech for the researcher. Unnecessary conflation dampens research efforts. Moreover, a clear demarcation should be made for proposed methods to solve hate speech, abusive language and cyberbullying detection. Their characteristics differ and proposed solutions might not generalize.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Implications for future research",
"sec_num": "5"
},
{
"text": "Finally, making code public is always in the best interest of the research community, and when that is not possible, the hyperparameter choices and other necessary settings should be reported to support the replicability of research work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Implications for future research",
"sec_num": "5"
},
{
"text": "This work is intended to aid understanding of the limitations of existing hate speech data and the way forward for future research. The contributions of this work include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Recommendation on a better approach to make datasets publicly available in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Requirements for any future researcher/organization interested in collecting and labelling data: -Persistently publicly available -Consistent train-test split -Less bias -Lack of data degradation -Common evaluation metric -Basic pre-processing These suggestions can easily be applied to other NLP applications, apart from hate speech detection, that require real-world datasets. We acknowledge the fact that an unbiased dataset does not exist; however, there are steps that can be taken to make datasets less biased. Finally, even though we might have highlighted limitations in datasets and approaches, this is not meant as negative criticism of the authors or their work. We acknowledge that their individual and collective efforts have brought us this far in this research area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "1 https://github.com/ZeerakW/hatespeech 2 https://github.com/t-davidson/hate-speech-and-offensive-language 3 https://hatebase.org/ 4 https://www.figure-eight.com/ 5 https://dataverse.mpi-sws.org/dataset.xhtml?persistentId=doi:10.5072/FK2/ZDTEMN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ziqizhang/data#hate 7 https://github.com/jing-qian/A-Benchmark-Dataset-for-Learning-to-Intervene-in-Online-Hate-Speech",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://competitions.codalab.org/competitions/19935#phases 9 UCSM-DUE/IWG hatespeech public 10 github.com/MeDarina/HateSpeechImplicit 11 https://github.com/msang/haspeede 12 http://www.evalita.it/2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/anniethorburn/Hate-Speech-M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://dataverse.org/ 17 https://www.icpsr.umich.edu/web/pages/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.wired.com/story/twitters-disinformation-data-dumps-helpful/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements: The authors are grateful for the insightful comments from the reviewers that helped improve this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep learning for detecting cyberbullying across multiple social media platforms",
"authors": [
{
"first": "Sweta",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Awekar",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "141--153",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-319-76941-7_11"
]
},
"num": null,
"urls": [],
"raw_text": "Sweta Agrawal and Amit Awekar. 2018. Deep learn- ing for detecting cyberbullying across multiple so- cial media platforms. In Advances in Information Retrieval, pages 141-153, Cham. Springer Interna- tional Publishing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hate speech detection is not as easy as you may think: A closer look at model validation",
"authors": [
{
"first": "Aym\u00e9",
"middle": [],
"last": "Arango",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Poblete",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR'19",
"volume": "",
"issue": "",
"pages": "45--54",
"other_ids": {
"DOI": [
"10.1145/3331184.3331262"
]
},
"num": null,
"urls": [],
"raw_text": "Aym\u00e9 Arango, Jorge P\u00e9rez, and Barbara Poblete. 2019. Hate speech detection is not as easy as you may think: A closer look at model validation. In Proceed- ings of the 42nd International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, SIGIR'19, page 45-54, NY, USA. ACM.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Comput. Linguist",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {
"DOI": [
"10.1162/coli.07-034-R2"
]
},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Comput. Linguist., 34(4):555-596.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On analyzing annotation consistency in online abusive behavior datasets",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Md Rabiul Awal",
"suffix": ""
},
{
"first": "Roy Ka-Wei",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitrovi\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Rabiul Awal, Rui Cao, Roy Ka-Wei Lee, and San- dra Mitrovi\u0107. 2020. On analyzing annotation con- sistency in online abusive behavior datasets. In Pro- ceedings of the 14th International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep Learning for Hate Speech Detection in Tweets",
"authors": [
{
"first": "Pinkesh",
"middle": [],
"last": "Badjatiya",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web Companion -WWW '17 Companion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3041021.3054223"
]
},
"num": null,
"urls": [],
"raw_text": "Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep Learning for Hate Speech Detection in Tweets. Proceedings of the 26th International Conference on World Wide Web Companion -WWW '17 Companion.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "What does this imply? examining the impact of implicitness on the perception of hate speech",
"authors": [
{
"first": "Darina",
"middle": [],
"last": "Benikova",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wojatzki",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2018,
"venue": "Lecture Notes in Computer Science",
"volume": "",
"issue": "",
"pages": "171--179",
"other_ids": {
"DOI": [
"10.1007/978-3-319-73706-5_14"
]
},
"num": null,
"urls": [],
"raw_text": "Darina Benikova, Michael Wojatzki, and Torsten Zesch. 2018. What does this imply? examining the impact of implicitness on the perception of hate speech. Lecture Notes in Computer Science, 10713 LNAI:171-179.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Us and them: identifying cyber hate on Twitter across multiple protected characteristics",
"authors": [
{
"first": "Pete",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2016,
"venue": "EPJ Data Science",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1140/epjds/s13688-016-0072-6"
]
},
"num": null,
"urls": [],
"raw_text": "Pete Burnap and Matthew L. Williams. 2016. Us and them: identifying cyber hate on Twitter across mul- tiple protected characteristics. EPJ Data Science, 5(1).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An empirical comparison of supervised learning algorithms",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
},
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning, ICML '06",
"volume": "",
"issue": "",
"pages": "161--168",
"other_ids": {
"DOI": [
"10.1145/1143844.1143865"
]
},
"num": null,
"urls": [],
"raw_text": "Rich Caruana and Alexandru Niculescu-Mizil. 2006. An empirical comparison of supervised learning al- gorithms. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, page 161-168, NY, USA. Association for Computing Ma- chinery.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "CONAN -COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech",
"authors": [
{
"first": "Yi-Ling",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Elizaveta",
"middle": [],
"last": "Kuzmenko",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Serra Sinem Tekiroglu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guerini",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "2819--2829",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1271"
]
},
"num": null,
"urls": [],
"raw_text": "Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN - COunter NArratives through Nichesourcing: a Mul- tilingual Dataset of Responses to Fight Online Hate Speech. pages 2819-2829.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A coefficient of agreement for nominal scales",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 1960,
"venue": "Educational and Psychological Measurement",
"volume": "20",
"issue": "1",
"pages": "37--46",
"other_ids": {
"DOI": [
"10.1177/001316446002000104"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Examining racial bias in an online abuse corpus with structural topic modeling",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Debasmita",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson and Debasmita Bhattacharya. 2020. Examining racial bias in an online abuse corpus with structural topic modeling. In Proceedings of the 14th International AAAI Conference on Web and So- cial Media.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Racial Bias in Hate Speech and Abusive Language Detection Datasets",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Debasmita",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2019,
"venue": "Third Abusive Language Workshop, Annual Meeting for the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial Bias in Hate Speech and Abusive Language Detection Datasets. In Third Abu- sive Language Workshop, Annual Meeting for the As- sociation for Computational Linguistics 2019.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automated Hate Speech Detection and the Problem of Offensive Language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17",
"volume": "",
"issue": "",
"pages": "512--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of the 11th International AAAI Con- ference on Web and Social Media, ICWSM '17, pages 512-515.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The kappa statistic: A second look",
"authors": [
{
"first": "Barbara",
"middle": [
"Di"
],
"last": "",
"suffix": ""
},
{
"first": "Eugenio",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2004,
"venue": "Comput. Linguist",
"volume": "30",
"issue": "1",
"pages": "95--101",
"other_ids": {
"DOI": [
"10.1162/089120104773633402"
]
},
"num": null,
"urls": [],
"raw_text": "Barbara Di Eugenio and Michael Glass. 2004. The kappa statistic: A second look. Comput. Linguist., 30(1):95-101.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hate Speech Detection with Comment Embeddings",
"authors": [
{
"first": "Nemanja",
"middle": [],
"last": "Djuric",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Morris",
"suffix": ""
},
{
"first": "Mihajlo",
"middle": [],
"last": "Grbovic",
"suffix": ""
},
{
"first": "Vladan",
"middle": [],
"last": "Radosavljevic",
"suffix": ""
},
{
"first": "Narayan",
"middle": [],
"last": "Bhamidipati",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. 24th Int. Conf. World Wide Web",
"volume": "",
"issue": "",
"pages": "29--30",
"other_ids": {
"DOI": [
"10.1145/2740908.2742760"
]
},
"num": null,
"urls": [],
"raw_text": "Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Gr- bovic, Vladan Radosavljevic, and Narayan Bhamidi- pati. 2015. Hate Speech Detection with Comment Embeddings. In Proc. 24th Int. Conf. World Wide Web, pages 29-30.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "",
"middle": [],
"last": "Jl Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {
"DOI": [
"10.1037/h0031619"
]
},
"num": null,
"urls": [],
"raw_text": "JL Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological bulletin, 76(5):378-382.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Survey on Automatic Detection of Hate Speech in Text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Computing Surveys",
"volume": "51",
"issue": "4",
"pages": "1--30",
"other_ids": {
"DOI": [
"10.1145/3232676"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A Survey on Automatic Detection of Hate Speech in Text. ACM Computing Surveys, 51(4):1-30.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Toxic, hateful, offensive or abusive? what are we really classifying? an empirical analysis of hate speech datasets",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Soler",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Wanner",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6786--6794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna, Juan Soler, and Leo Wanner. 2020. Toxic, hateful, offensive or abusive? what are we really classifying? an empirical analysis of hate speech datasets. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 6786-6794, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A unified deep learning architecture for abuse detection",
"authors": [
{
"first": "Despoina",
"middle": [],
"last": "Antigoni Maria Founta",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Kourtellis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blackburn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 10th ACM Conference on Web Science, WebSci '19",
"volume": "",
"issue": "",
"pages": "105--114",
"other_ids": {
"DOI": [
"10.1145/3292522.3326028"
]
},
"num": null,
"urls": [],
"raw_text": "Antigoni Maria Founta, Despoina Chatzakou, Nicolas Kourtellis, Jeremy Blackburn, Athena Vakali, and Il- ias Leontiadis. 2019. A unified deep learning archi- tecture for abuse detection. In Proceedings of the 10th ACM Conference on Web Science, WebSci '19, page 105-114, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Large Scale Crowdsourcing and Characterization of Twitter Abusive Behavior",
"authors": [
{
"first": "Antigoni-Maria",
"middle": [],
"last": "Founta",
"suffix": ""
},
{
"first": "Constantinos",
"middle": [],
"last": "Djouvas",
"suffix": ""
},
{
"first": "Despoina",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI International Conference on Web and Social Media (ICWSM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antigoni-Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large Scale Crowdsourcing and Characterization of Twit- ter Abusive Behavior. In AAAI International Con- ference on Web and Social Media (ICWSM).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recognizing explicit and implicit hate speech using a weakly supervised two-path bootstrapping approach",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Kuppersmith",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "774--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Gao, Alexis Kuppersmith, and Ruihong Huang. 2017. Recognizing explicit and implicit hate speech using a weakly supervised two-path bootstrapping approach. In Proceedings of the Eighth Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), pages 774-782, Taipei, Taiwan. Asian Federation of Natural Lan- guage Processing.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Exploring hate speech detection in multimodal publications",
"authors": [
{
"first": "Raul",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Jaume",
"middle": [],
"last": "Gibert",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "Gomez",
"suffix": ""
},
{
"first": "Dimosthenis",
"middle": [],
"last": "Karatzas",
"suffix": ""
}
],
"year": 2019,
"venue": "2020 IEEE Winter Conference on Applications of Computer Vision (WACV)",
"volume": "",
"issue": "",
"pages": "1459--1467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raul Gomez, Jaume Gibert, Lluis Gomez, and Dimos- thenis Karatzas. 2019. Exploring hate speech detec- tion in multimodal publications. In 2020 IEEE Win- ter Conference on Applications of Computer Vision (WACV), pages 1459-1467.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention-based method for categorizing different types of online harassment language. Communications in Computer and Information Science",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Karatsalos",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Panagiotakis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "321--330",
"other_ids": {
"DOI": [
"10.1007/978-3-030-43887-6_26"
]
},
"num": null,
"urls": [],
"raw_text": "Christos Karatsalos and Yannis Panagiotakis. 2020. Attention-based method for categorizing different types of online harassment language. Communica- tions in Computer and Information Science, page 321-330.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes challenge: Detecting hate speech in multimodal memes",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Firooz",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Vedanuj",
"middle": [],
"last": "Goswami",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Pratik",
"middle": [],
"last": "Ringshia",
"suffix": ""
},
{
"first": "Davide",
"middle": [],
"last": "Testuggine",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv Preprint",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, and Davide Testuggine. 2020. The hateful memes chal- lenge: Detecting hate speech in multimodal memes. In ArXiv Preprint.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sarah Santiago, and Vivek Datta. 2020. Intersectional bias in hate speech and abusive language datasets",
"authors": [
{
"first": "Jae Yeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Ortiz",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Santiago",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Datta",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv Preprint",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santi- ago, and Vivek Datta. 2020. Intersectional bias in hate speech and abusive language datasets. In ArX- ivPreprint.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning multiple layers of features from tiny images",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. Technical report.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "MNIST handwritten digit database",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun and Corinna Cortes. 2010. MNIST hand- written digit database.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Enhanced offensive language detection through data augmentation",
"authors": [
{
"first": "Ruibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Guangxuan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
}
],
"year": 2020,
"venue": "ICWSM'20 Safety Data Challenge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruibo Liu, Guangxuan Xu, and Soroush Vosoughi. 2020. Enhanced offensive language detection through data augmentation. In ICWSM'20 Safety Data Challenge.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The Thin Line Between Hate and Profanity",
"authors": [
{
"first": "Kosisochukwu",
"middle": [
"Judith"
],
"last": "Madukwe",
"suffix": ""
},
{
"first": "Xiaoying",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "AI 2019: Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "344--356",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-030-35288-2_28"
]
},
"num": null,
"urls": [],
"raw_text": "Kosisochukwu Judith Madukwe and Xiaoying Gao. 2019. The Thin Line Between Hate and Profan- ity. In AI 2019: Advances in Artificial Intelligence, pages 344-356, Cham. Springer International Pub- lishing.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Detecting Hate Speech in Social Media",
"authors": [
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "467--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shervin Malmasi and Marcos Zampieri. 2017. Detect- ing Hate Speech in Social Media. In Proceedings of Recent Advances in Natural Language Processing (RANLP), pages 467-472, Varna, Bulgaria.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Challenges in discriminating profanity from hate speech",
"authors": [
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Experimental and Theoretical Artificial Intelligence",
"volume": "30",
"issue": "2",
"pages": "187--202",
"other_ids": {
"DOI": [
"10.1080/0952813X.2017.1409284"
]
},
"num": null,
"urls": [],
"raw_text": "Shervin Malmasi and Marcos Zampieri. 2018. Chal- lenges in discriminating profanity from hate speech. Journal of Experimental and Theoretical Artificial Intelligence, 30(2):187-202.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Prediction uncertainty estimation for hate speech classification",
"authors": [
{
"first": "Kristian",
"middle": [],
"last": "Miok",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen-Doan",
"suffix": ""
},
{
"first": "Bla\u017e",
"middle": [],
"last": "\u0160krlj",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Zaharie",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik-\u0160ikonja",
"suffix": ""
}
],
"year": 2019,
"venue": "Lecture Notes in Computer Science",
"volume": "",
"issue": "",
"pages": "286--298",
"other_ids": {
"DOI": [
"10.1007/978-3-030-31372-2_24"
]
},
"num": null,
"urls": [],
"raw_text": "Kristian Miok, Dong Nguyen-Doan, Bla\u017e\u0160krlj, Daniela Zaharie, and Marko Robnik-\u0160ikonja. 2019. Prediction uncertainty estimation for hate speech classification. Lecture Notes in Computer Science, page 286-298.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A bert-based transfer learning approach for hate speech detection in online social media",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Mozafari",
"suffix": ""
},
{
"first": "Reza",
"middle": [],
"last": "Farahbakhsh",
"suffix": ""
},
{
"first": "No\u00ebl",
"middle": [],
"last": "Crespi",
"suffix": ""
}
],
"year": 2020,
"venue": "Complex Networks and Their Applications VIII",
"volume": "",
"issue": "",
"pages": "928--940",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-030-36687-2_77"
]
},
"num": null,
"urls": [],
"raw_text": "Marzieh Mozafari, Reza Farahbakhsh, and No\u00ebl Crespi. 2020. A bert-based transfer learning approach for hate speech detection in online social media. In Complex Networks and Their Applications VIII, pages 928-940, Cham. Springer International Pub- lishing.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Abusive language detection in online user content",
"authors": [
{
"first": "Chikashi",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Achint",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th International Conference on World Wide Web, WWW '16",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {
"DOI": [
"10.1145/2872427.2883062"
]
},
"num": null,
"urls": [],
"raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. In Proceed- ings of the 25th International Conference on World Wide Web, WWW '16, page 145-153, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Social data: Biases, methodological pitfalls, and ethical boundaries",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Olteanu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Emre",
"middle": [],
"last": "K\u0131c\u0131man",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers in Big Data",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3389/fdata.2019.00013"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre K\u0131c\u0131man. 2019. Social data: Bi- ases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2:13.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The limits of abstract evaluation metrics: The case of hate speech detection",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Olteanu",
"suffix": ""
},
{
"first": "Kartik",
"middle": [],
"last": "Talamadupula",
"suffix": ""
},
{
"first": "Kush",
"middle": [
"R"
],
"last": "Varshney",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Web Science Conference, WebSci '17",
"volume": "",
"issue": "",
"pages": "405--406",
"other_ids": {
"DOI": [
"10.1145/3091478.3098871"
]
},
"num": null,
"urls": [],
"raw_text": "Alexandra Olteanu, Kartik Talamadupula, and Kush R. Varshney. 2017. The limits of abstract evaluation metrics: The case of hate speech detection. In Pro- ceedings of the 2017 ACM on Web Science Confer- ence, WebSci '17, page 405-406, NY, USA. ACM.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Implicit crowdsourcing for identifying abusive behavior in online social networks",
"authors": [
{
"first": "Abiola",
"middle": [],
"last": "Osho",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Tucker",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Amariucai",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv PrePrint",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abiola Osho, Ethan Tucker, and George Amariucai. 2020. Implicit crowdsourcing for identifying abu- sive behavior in online social networks. In ArXiv PrePrint.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Reducing gender bias in abusive language detection",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Ho Park",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2799--2804",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1302"
]
},
"num": null,
"urls": [],
"raw_text": "Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Re- ducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2799-2804, Bxl, Belgium. ACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A benchmark dataset for learning to intervene in online hate speech",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Bethke",
"suffix": ""
},
{
"first": "Yinyin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Belding",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4757--4766",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1482"
]
},
"num": null,
"urls": [],
"raw_text": "Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Beld- ing, and William Yang Wang. 2019. A bench- mark dataset for learning to intervene in online hate speech. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4757-4766.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Rist",
"suffix": ""
},
{
"first": "Guillermo",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Cabrera",
"suffix": ""
},
{
"first": "Nils",
"middle": [],
"last": "Kurowsky",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wojatzki",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.17185/duepublico/42132"
]
},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Ross, Michael Rist, Guillermo Carbonell, Ben- jamin Cabrera, Nils Kurowsky, and Michael Wo- jatzki. 2017. Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The risk of racial bias in hate speech detection",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Sap",
"suffix": ""
},
{
"first": "Dallas",
"middle": [],
"last": "Card",
"suffix": ""
},
{
"first": "Saadia",
"middle": [],
"last": "Gabriel",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1668--1678",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1668-1678, FLR, Italy. ACL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Boosting text classification performance on sexist tweets by text augmentation and text generation using a combination of knowledge graphs",
"authors": [
{
"first": "Sima",
"middle": [],
"last": "Sharifirad",
"suffix": ""
},
{
"first": "Borna",
"middle": [],
"last": "Jafarpour",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5114"
]
},
"num": null,
"urls": [],
"raw_text": "Sima Sharifirad, Borna Jafarpour, and Stan Matwin. 2018. Boosting text classification performance on sexist tweets by text augmentation and text genera- tion using a combination of knowledge graphs. In Proceedings of the 2nd Workshop on Abusive Lan- guage Online (ALW2), pages 107-114, Brussels, Belgium. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Studying generalisability across abusive language detection datasets",
"authors": [
{
"first": "Steve",
"middle": [
"Durairaj"
],
"last": "Swamy",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Jamatia",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "940--950",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1088"
]
},
"num": null,
"urls": [],
"raw_text": "Steve Durairaj Swamy, Anupam Jamatia, and Bj\u00f6rn Gamb\u00e4ck. 2019. Studying generalisability across abusive language detection datasets. In Proceed- ings of the 23rd Conference on Computational Nat- ural Language Learning (CoNLL), pages 940-950, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A dictionary-based approach to racism detection in Dutch social media",
"authors": [
{
"first": "St\u00e9phan",
"middle": [],
"last": "Tulkens",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Hilte",
"suffix": ""
},
{
"first": "Elise",
"middle": [],
"last": "Lodewyckx",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Verhoeven",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the LREC 2016 Workshop on Text Analytics for Cybersecurity and Online Safety",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "St\u00e9phan Tulkens, Lisa Hilte, Elise Lodewyckx, Ben Verhoeven, and Walter Daelemans. 2016. A dictionary-based approach to racism detection in Dutch social media. In Proceedings of the LREC 2016 Workshop on Text Analytics for Cybersecurity and Online Safety (TA-COS). European Language Resources Association (ELRA).",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "to target or not to target\": Identification and analysis of abusive text using ensemble of classifiers",
"authors": [
{
"first": "Gaurav",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Niyati",
"middle": [],
"last": "Chhaya",
"suffix": ""
},
{
"first": "Vishwa",
"middle": [],
"last": "Vinay",
"suffix": ""
}
],
"year": 2020,
"venue": "ICWSM'20 Safety Data Challenge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gaurav Verma, Niyati Chhaya, and Vishwa Vinay. 2020. \"to target or not to target\": Identification and analysis of abusive text using ensemble of classifiers. In ICWSM'20 Safety Data Challenge.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Challenges and frontiers in abusive content detection",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "80--93",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3509"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abusive Language Online, pages 80-93, FLR, Italy. ACL.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Hate me, hate me not: Hate speech detection on facebook",
"authors": [
{
"first": "Fabio",
"middle": [
"Del"
],
"last": "Vigna",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Cimino",
"suffix": ""
},
{
"first": "Felice",
"middle": [
"Dell"
],
"last": "Orletta",
"suffix": ""
}
],
"year": 2017,
"venue": "ITA-SEC 17",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Del Vigna, Andrea Cimino, and Felice Dell Or- letta. 2017. Hate me, hate me not: Hate speech de- tection on facebook. In ITA-SEC 17, Venice.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Detecting Hate Speech on the World Wide Web",
"authors": [
{
"first": "William",
"middle": [],
"last": "Warner",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Workshop on Language in Social Media",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Warner and Julia Hirschberg. 2012. Detecting Hate Speech on the World Wide Web. In Proceed- ings of the 2012 Workshop on Language in Social Media, pages 19-26.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Are you a racist or am I seeing things? annotator influence on hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "138--142",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5618"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem. 2016. Are you a racist or am I seeing things? annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138- 142, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Understanding abuse: A typology of abusive language detection subtasks",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3012"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Lan- guage Online, pages 78-84, Vancouver, BC, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93, San Diego, California. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Bridging the gaps: Multi task learning for domain transfer of hate speech detection",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Bingel",
"suffix": ""
}
],
"year": 2018,
"venue": "Online Harassment. Human-Computer Interaction Series",
"volume": "",
"issue": "",
"pages": "29--55",
"other_ids": {
"DOI": [
"10.1007/978-3-319-78583-7_3"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, James Thorne, and Joachim Bingel. 2018. Bridging the gaps: Multi task learning for domain transfer of hate speech detection. In Gol- beck J. (eds) Online Harassment. Human-Computer Interaction Series, pages 29-55, Cham. Springer In- ternational Publishing.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Hate Speech on Twitter: A Pragmatic Approach to Collect Hateful and Offensive Expressions and Perform Hate Speech Detection",
"authors": [
{
"first": "Hajime",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Mondher",
"middle": [],
"last": "Bouazizi",
"suffix": ""
},
{
"first": "Tomoaki",
"middle": [],
"last": "Ohtsuki",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "13825--13835",
"other_ids": {
"DOI": [
"10.1109/ACCESS.2018.2806394"
]
},
"num": null,
"urls": [],
"raw_text": "Hajime Watanabe, Mondher Bouazizi, and Tomoaki Ohtsuki. 2018. Hate Speech on Twitter: A Prag- matic Approach to Collect Hateful and Offensive Expressions and Perform Hate Speech Detection. IEEE Access, 6:13825-13835.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Detection of Abusive Language: the Problem of Biased Datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "602--608",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1, pages 602- 608.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Detecting offensive tweets via topical feature discovery over a large scale twitter corpus",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12",
"volume": "",
"issue": "",
"pages": "1980--1984",
"other_ids": {
"DOI": [
"10.1145/2396761.2398556"
]
},
"num": null,
"urls": [],
"raw_text": "Guang Xiang, Bin Fan, Ling Wang, Jason Hong, and Carolyn Rose. 2012. Detecting offensive tweets via topical feature discovery over a large scale twit- ter corpus. In Proceedings of the 21st ACM In- ternational Conference on Information and Knowl- edge Management, CIKM '12, page 1980-1984, NY, USA. ACM.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Hate speech detection: A solved problem? the challenging case of long tail on twitter",
"authors": [
{
"first": "Ziqi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2018,
"venue": "Semantic Web",
"volume": "",
"issue": "",
"pages": "925--945",
"other_ids": {
"DOI": [
"10.3233/SW-180338"
]
},
"num": null,
"urls": [],
"raw_text": "Ziqi Zhang and Lei Luo. 2018. Hate speech detection: A solved problem? the challenging case of long tail on twitter. Semantic Web, page 925 -945.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Detecting hate speech on twitter using a convolution-gru based deep neural network",
"authors": [
{
"first": "Ziqi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Tepper",
"suffix": ""
}
],
"year": 2018,
"venue": "The Semantic Web: European Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "745--760",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-319-93417-4_48"
]
},
"num": null,
"urls": [],
"raw_text": "Ziqi Zhang, David Robinson, and Jonathan Tepper. 2018. Detecting hate speech on twitter using a convolution-gru based deep neural network. In The Semantic Web: European Semantic Web Conference, pages 745-760, Cham. Springer International Pub- lishing.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"text": "5% of the conversations contained hate speech while about 43.2% of the comments are labelled as hateful. Each entry in the dataset is a conversation of several indexed comments. The index (in another column) is used to identify which comment is considered hateful, then a response intervention is provided. The entries with no hate speech do not have an intervention response. The number of responses do not correspond to the number of hateful comments in the conversation. The train set contains 5,424 comments while the test set contains 607 comments labelled as non-racist and racist. The dataset is not publicly available, however the dictionary used can be accessed at https://github.c om/clips/hades",
"type_str": "table",
"content": "<table><tr><td>6. DJURIC Dataset: (Djuric et al., 2015) col-agreement using the Cohen Kappa score of</td><td>Datasets Availability</td><td>Classes/Labels</td><td>Size</td><td>Format</td></tr><tr><td colspan=\"5\">4. FOUNTA Dataset 5 : (Founta et al., 2018) pub-lished a dataset of 80k tweets, annotated for various abusive behaviors (abusive, hateful speech, spam, normal) and made publicly available using TweetIDs. They use a boosted random sampling technique through an itera-tive and incremental process to generate the final dataset in order to improve the number of derogatory samples. They use a larger num-ber of annotators (20) through crowdsourc-ing. Their classes are None at 59%, Spam at 22.5%, Abusive at 11% and Hateful at 7.5%. Recently, as part of the ICWSM Data chal-lenge, an updated version of this dataset, now containing 100k was made available in text format. 5. WARNER Dataset: The constituent data was collated by (Warner and Hirschberg, 2012) from Yahoo News Group and URLs from the American Jewish Society. It contains 9000 paragraphs, manually annotated into seven (7) categories (anti-semitic, anti-black, anti-Asian, anti-woman, anti-Muslim, anti-immigrant or other hate(anti-gay and anti-white)). It doesn't seem to be publicly avail-fore, a conversation with 5 hateful comments can have just 3 responses to intervene. 10. HATEVAL Dataset 8 : This is a very small dataset for detecting hate speech against on Dutch Facebook pages most likely to con-tain derogatory statements such as a Belgian anti-islamic organization and a right-wing or-1 No Sexual Orientation --Race Disability Religion 2 Yes Racism 11.69% TweetID Sexism 20.00% Neither 68.33% 16,914 tweets Yes Hate Speech 5.77% Raw text 3 Offensive 77.43% Neither 16.80% 24k tweets 4 Yes Abusive 11% TweetID Hateful 7.5% Spam 22.5% Normal/None 59% 80,000 tweets No Anti-Semitic posts. 94.There-lected comments from the Yahoo Finance website. 56,280 comments were labeled as hateful while 895,456 labeled as clean from 209,776 users. 7. NOBATA Dataset: The authors in (Nobata et al., 2016) collected data from Yahoo Fi-nance and News comment section. Their definition of abusive language conflates hate speech, profanity and derogatory language. It was labelled as clean or abusive by Yahoo employees. In the primary dataset, 7.0% of Finance and 16.4% News comment were la-belled as abusive. In the temporal dataset, 3.4% of Finance and 10.7% News comment were labelled as abusive. The dataset was re-ported to be at https://webscope.sandbox .yahoo.com/, however it currently cannot be found. They reported an annotation agreement rate of 0.867 and Fleiss Kappa of 0.401. 8. ZHANG Dataset 6 : The authors in (Zhang et al., 2018) created a dataset using refugee and muslim specific words and hashtags from Twitter. The dataset contains 2,435 tweets with 414 labelled as hate and 2,021 labelled as non-hate. The dataset was initially pub-licly available but not anymore due to the data sharing policy of the authors' institution. 9. QIAN Dataset 7 : (Qian et al., 2019) collected data from Reddit and Gab including interven-tion responses written by humans. Their data preserves the conversational thread as a way to provide context. From Reddit, they col-lect 5,020 conversations which includes a to-tal of 22,324 comments labelled as hate or non-hate. 76.6% of the conversations contain hate speech while only 23.5% of the com-ments are labelled as hateful. They were mined from known toxic subbreddit using hate keywords. 
Similarly, from Gab, they col-lected 11,825 conversations containing 33,776 0.60. Finally, since hate speech can occur in dif-ferent modes such as text, images, audio and video, there are some multimodal datasets to address this issue: 4 The Need For A Benchmark Dataset Anti-Black women and immigrants. It contains English and Spanish tweets labelled into hateful or not 5 Anti-Asian Anti-Woman --16. MMHS150K Dataset 13 : (Gomez et al., 2019) Anti-Muslim made publicly available a multimodal (image Anti-Immigrant hateful. In other languages, hate speech detection re-Other hate and text) dataset collected from Twitter using No Hate Speech 5.91% -6 Clean 94.08% -Hatebase terms. It contains 150,000 tweets 951,736 comments search have also progressed. crawled and collected data from comments manually annotated into six classes of No at-tacks to any community, Racist, Sexist, Ho-mophobic, Religion based attacks or Attacks to other communities. 17. HATEFUL MEMES Dataset: Facebook AI (Kiela et al., 2020) collected a multimodal dataset for detecting and classification of hate speech containing images and text. It was an-notated using their specific definition of hate speech. It contains 10k memes with a 5% dev confounders were found for both modalities), unimodal hate (one or both modalities were already hateful on their own), benign text con-founder, benign image confounder, random non-hateful. A benign confounder is defined as \"a minimum replacement image or replace-ment text that flips the label for a given mul-timodal meme from hateful to non-hateful.\" They record a Cohen's kappa score (inter an-notators reliability) of 67.2%. The dataset is available upon joining a currently ongoing competition 14 . No Abusive 7 %of F + 16.4% of N -7 Clean 3.4 %of F + 10.7% of N -9 Yes Hate Speech 23.5% Non-Hate Speech 76.5% Raw text 22,324 Reddit comments 9 Yes Hate Speech 43.2% Non-Hate Speech 51.8% Raw text 33,776 Gab comments 11 Yes 6 Point Likert Scale --541 tweets 12 Yes Hate Speech 33% -Non-Hate Speech 67% 33 tweets 13 No No Hate Weak Hate Strong Hate --6,031 Facebook comments 11. , 2016) No Racist and 10% test set. The memes belong to the following classes: multimodal hate (benign Non-Racist --15 17,567 Facebook comments</td></tr><tr><td>In the field of ML, benchmark datasets are datasets</td><td colspan=\"4\">able. ganization. They recorded an inter-annotator</td></tr><tr><td>used to evaluate or compare the performance of</td><td/><td/><td/><td/></tr><tr><td>ML methods on a particular task. It is used</td><td/><td/><td/><td/></tr><tr><td>13 https://gombru.github.io/2019/10/09/</td><td/><td/><td/><td/></tr><tr><td>MMHS/</td><td/><td/><td/><td/></tr><tr><td>14 https://www.drivendata.org/competiti</td><td/><td/><td/><td/></tr><tr><td>ons/64/hateful-memes/page/205/</td><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"text": "Analysis of some of the existing hate speech datasets by researchers to test how their new ideas perform against existing ones",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF2": {
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Varying Pre-processing Steps</td></tr></table>",
"num": null
},
"TABREF4": {
"html": null,
"text": "Benchmark criteria met by datasets",
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}