{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:51.263775Z"
},
"title": "HurtBERT: Incorporating Lexical Features with BERT for the Detection of Abusive Language",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Koufakou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Florida Gulf Coast University",
"location": {
"addrLine": "Software Engineering Dept"
}
},
"email": "akoufakou@fgcu.edu\u2665pamungka"
},
{
"first": "Endang",
"middle": [
"Wahyu"
],
"last": "Pamungkas",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": "",
"affiliation": {},
"email": "patti@di.unito.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The detection of abusive or offensive remarks in social texts has received significant attention in research. In several related shared tasks, BERT has been shown to be the state-of-theart. In this paper, we propose to utilize lexical features derived from a hate lexicon towards improving the performance of BERT in such tasks. We explore different ways to utilize the lexical features in the form of lexicon-based encodings at the sentence level or embeddings at the word level. We provide an extensive dataset evaluation that addresses in-domain as well as cross-domain detection of abusive content to render a complete picture. Our results indicate that our proposed models combining BERT with lexical features help improve over a baseline BERT model in many of our indomain and cross-domain experiments.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The detection of abusive or offensive remarks in social texts has received significant attention in research. In several related shared tasks, BERT has been shown to be the state-of-theart. In this paper, we propose to utilize lexical features derived from a hate lexicon towards improving the performance of BERT in such tasks. We explore different ways to utilize the lexical features in the form of lexicon-based encodings at the sentence level or embeddings at the word level. We provide an extensive dataset evaluation that addresses in-domain as well as cross-domain detection of abusive content to render a complete picture. Our results indicate that our proposed models combining BERT with lexical features help improve over a baseline BERT model in many of our indomain and cross-domain experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The automatic classification of abusive and offensive language is a complex problem, that has raised a growing interest in the Natural Language Processing community in the last decade or so (Fortuna and Nunes, 2018; Vidgen et al., 2019; Poletto et al., 2020) . Several benchmarks have been introduced to measure the performance of mostly supervised machine learning systems tackling such problems as text classification tasks Zampieri et al., 2019b) . The evaluation of abusive and offensive language, however, is not straightforward. Among the issues, it has been observed how the topics discussed in the messages composing the benchmark datasets introduce biases, interfering with the modeling of the pure pragmatic phenomena by the supervised models trained on the respective training sets (Wiegand et al., 2019; Caselli et al., 2020) .",
"cite_spans": [
{
"start": 190,
"end": 215,
"text": "(Fortuna and Nunes, 2018;",
"ref_id": "BIBREF14"
},
{
"start": 216,
"end": 236,
"text": "Vidgen et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 237,
"end": 258,
"text": "Poletto et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 426,
"end": 449,
"text": "Zampieri et al., 2019b)",
"ref_id": "BIBREF36"
},
{
"start": 793,
"end": 815,
"text": "(Wiegand et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 816,
"end": 837,
"text": "Caselli et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Among the recent neural architectures, BERT (Bidirectional Encoder Representations from Trans-formers (Devlin et al., 2019) ), is considered the state of the art in several NLP tasks, including abusive and offensive language detection. For example, in the SemEval 2019 Task 6 (Zampieri et al., 2019b, OffensEval) , seven out of the top-ten teams used BERT, including the top team. The knowledge encoded in such model, based on transformer neural networks, is induced by a pre-training performed on a large quantity of text, then fine-tuned to a specific dataset in order to learn complex correlations between the natural language and the labels. One disadvantage to models such as BERT is that no additional external knowledge is taken into consideration, such as linguistic information from a lexicon.",
"cite_spans": [
{
"start": 102,
"end": 123,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 276,
"end": 312,
"text": "(Zampieri et al., 2019b, OffensEval)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a hybrid methodology to infuse external knowledge into a supervised model for abusive language detection. We propose to add extra lexical features with BERT at sentenceor term-level, with the goal of improving the quality of its prediction of abusive language. In particular, we investigate the inclusion of features from a categorized lexicon in the domain of offensive and abusive language, with the aim of supporting transfer knowledge in that domain across datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform extensive, in-domain and crossdomain experimentation, to evaluate the performance of models which are trained on one dataset and tested on other datasets. Cross-domain classification of abusive content has been proposed to address the diverse topical focuses and targets as exhibited in different datasets developed from the research community in the last years (Karan and\u0160najder, 2018; Pamungkas and Patti, 2019; Pamungkas et al., 2020b) . For example, some datasets proposed for hate speech detection focus on racism or sexism (Waseem and Hovy, 2016) , in line with the target-oriented nature of hate speech, while others on offensive or abusive language without tar-geting a specific vulnerable group (Zampieri et al., 2019a; Caselli et al., 2020) . This makes it difficult to know if a model that performs well on one dataset will generalize well for other datasets. However, several actors -including institutions, NGO operators and ICT companies to comply to governments' demands for counteracting online abuse 1have an increasing need for automatic support to moderation (Shen and Rose, 2019; Chung et al., 2019) or for monitoring and mapping the dynamics and the diffusion of hate speech dynamics over a territory (Paschalides et al., 2020; Capozzi et al., 2019) considering different targets and vulnerable categories. In this scenario, there is a considerable urgency to investigate computational approaches for abusive language detection supporting the development of robust models, which can be used to detect abusive contents with different scope or topical focuses. When addressing this challenge, the motivation for our proposal is the hypothesis that the addition of lexical knowledge from an abusive lexicon will soften the topic bias issue (Wiegand et al., 2019) , making the model more stable against cross-domain evaluation. Our extensive experimentation with many different datasets shows that our proposed methods improve over the BERT baseline in the majority of the in-domain and cross-domain experiments.",
"cite_spans": [
{
"start": 373,
"end": 397,
"text": "(Karan and\u0160najder, 2018;",
"ref_id": null
},
{
"start": 398,
"end": 424,
"text": "Pamungkas and Patti, 2019;",
"ref_id": "BIBREF23"
},
{
"start": 425,
"end": 449,
"text": "Pamungkas et al., 2020b)",
"ref_id": "BIBREF22"
},
{
"start": 540,
"end": 563,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF31"
},
{
"start": 715,
"end": 739,
"text": "(Zampieri et al., 2019a;",
"ref_id": "BIBREF34"
},
{
"start": 740,
"end": 761,
"text": "Caselli et al., 2020)",
"ref_id": null
},
{
"start": 1089,
"end": 1110,
"text": "(Shen and Rose, 2019;",
"ref_id": "BIBREF27"
},
{
"start": 1111,
"end": 1130,
"text": "Chung et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1233,
"end": 1259,
"text": "(Paschalides et al., 2020;",
"ref_id": null
},
{
"start": 1260,
"end": 1281,
"text": "Capozzi et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 1769,
"end": 1791,
"text": "(Wiegand et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The last ten years have seen a rapidly increasing amount of research work on the automatic detection of abusive and offensive language, as highlighted by the success of international evaluation campaigns such as HatEval on gender-or ethnic-based hate speech, OffensEval (Zampieri et al., 2019b , 2020) on offensive language, or AMI (Fersini et al., 2018a,b , Automatic Misogyny Identification) on misogyny. Several annotated corpora have also been established as benchmarks besides the data produced for the aforementioned shared tasks for several languages, for instance, Waseem et al. (2017) for racism and sexism in English, Sanguinetti et al. (2018) for hate speech Italian and Mubarak et al. (2017) for abusive language in Arabic. We refer to (Poletto et al., 2020) for a systematic and updated review of resources and benchmark corpora for hate speech 1 See for instance the Code of Conduct on countering illegal hate speech online issued by EU commission (EU Commission, 2016). detection across different languages.",
"cite_spans": [
{
"start": 270,
"end": 293,
"text": "(Zampieri et al., 2019b",
"ref_id": "BIBREF36"
},
{
"start": 332,
"end": 356,
"text": "(Fersini et al., 2018a,b",
"ref_id": null
},
{
"start": 573,
"end": 593,
"text": "Waseem et al. (2017)",
"ref_id": "BIBREF30"
},
{
"start": 628,
"end": 653,
"text": "Sanguinetti et al. (2018)",
"ref_id": "BIBREF26"
},
{
"start": 682,
"end": 703,
"text": "Mubarak et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 748,
"end": 770,
"text": "(Poletto et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The vast majority of approaches proposed in the literature are based on supervised learning, with statistical models learning the features of the target language and their relationship with the abusive phenomena from an annotated corpus. Most works propose variations on neural architectures such as Recurrent Neural Networks (especially Long Shortterm Memory networks), or Convolutional Neural Networks (Mishra et al., 2019) . An investigation on what type of attention mechanism (contextual vs. self-attention) is better for abusive language detection using deep learning architectures is proposed in (Chakrabarty et al., 2019) . Character-based models have also been proposed for this task .",
"cite_spans": [
{
"start": 404,
"end": 425,
"text": "(Mishra et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 603,
"end": 629,
"text": "(Chakrabarty et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More recently, models based on the Transformer neural network architecture have gained prominence, thanks to their ability of learning accurate language models from very large corpora in an unsupervised fashion, and then being fine-tuned to specific classification tasks, such as abusive language detection, with relatively little amount of annotated data. Several ideas have been proposed in the literature to improve the performance of BERT for abusive language detection. For example, finetuning large pre-trained language models in (Bodapati et al., 2019) .",
"cite_spans": [
{
"start": 536,
"end": 559,
"text": "(Bodapati et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A complementary approach to supervised learning towards the detection of abusive and offensive language is the use of language resources such as lexicons and dictionaries. Wiegand et al. (2018) proposed a method to induce a list of English words to capture abusive language. introduced an English lexicon covering hate speech, racism, sexism, and homophobia. Other languages have relatively less resources with respect to English, apart perhaps from Arabic, for which two lexical resources are available, by Mubarak et al. (2017) , with focus on obscenity, and by Albadi et al. (2018) . A notable exception is HurtLex (Bassignana et al., 2018), a multilingual lexicon of offensive words, created by semi-automatically translating a handcrafted resource in Italian by linguist Tullio De Mauro (called Parole per Ferire, \"words to hurt\" (De Mauro, 2016)) into 53 languages. Lemmas in HurtLex are associated to 17 non-mutually exclusive categories, plus a binary macro-category indicating whether the lemma reflects a stereotype. The number of lemmas in any language of HurtLex is in the order of thousands, depending on the lan-guage, and they are divided into the four principal parts of speech: noun, adjective, verb, and adverb. In our earlier research, we used a technique called retrofitting to enhance word embeddings using HurtLex, for detecting aggression in English, Hindi, and Bengali (Koufakou et al., 2020) .",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "Wiegand et al. (2018)",
"ref_id": "BIBREF33"
},
{
"start": 508,
"end": 529,
"text": "Mubarak et al. (2017)",
"ref_id": "BIBREF20"
},
{
"start": 564,
"end": 584,
"text": "Albadi et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 1393,
"end": 1416,
"text": "(Koufakou et al., 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this work, we propose to infuse the lexical knowledge from HurtLex into a BERT model with the goal to improve the efficacy of abusive and offensive language prediction models. Specifically, we utilize different representations of the HurtLex categories as they are found in the data, utilize them with a BERT model, and explore how they affect the detection accuracy. To the best of our knowledge, the utilization of a hate lexicon, especially one that is based on this kind of structure with multiple categories, with a BERT model has not been explored before. We fully describe our methods in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we explore two models based on how they utilize the lexical features extracted from the hate speech lexicon, HurtLex (Bassignana et al., 2018) . Both of our proposed models utilize two inputs: (a) the sentence tokens (BERT's usual input), and (b) a vector we create based on the categories in HurtLex as they are found in our data. All the data we explore in this work are in English, so we used only the English version of HurtLex and leave the multilingual aspect for future work. Specifically, we use the English section of HurtLex version 1.2 2 . It contains 6,072 entries, of which 2,268 are in the conservative subset (these are terms with higher confidence). Table 1 lists the categories in HurtLex, with the number of terms in each one, as well as examples.",
"cite_spans": [
{
"start": 132,
"end": 157,
"text": "(Bassignana et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 681,
"end": 688,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
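To make the lexicon setup above concrete, here is a minimal loading sketch (not part of the paper) that reads the English HurtLex 1.2 release and keeps its higher-confidence conservative subset; the file name and the column names ("lemma", "category", "level") are assumptions about the released TSV format.

```python
import pandas as pd

# Minimal sketch: load the English HurtLex 1.2 lexicon and group lemmas by category.
# The file name and column names ("lemma", "category", "level") are assumptions
# about the released TSV format, not details given in the paper.
hurtlex = pd.read_csv("hurtlex_EN.tsv", sep="\t")

# Keep the higher-confidence "conservative" entries mentioned in the text.
conservative = hurtlex[hurtlex["level"] == "conservative"]

# Map each of the 17 categories to its set of lemmas for fast lookup.
category_lemmas = {
    cat: set(group["lemma"].str.lower())
    for cat, group in conservative.groupby("category")
}
print(len(hurtlex), "entries,", len(conservative), "conservative,",
      len(category_lemmas), "categories")
```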
{
"text": "Our models both start using the BERT layer, which takes three inputs consisting of id, mask and segment -see Figures 1 and 2. The output of this BERT layer connects to a dense layer. Please note that specific details and parameters for the BERT Baseline as well as any layers in our models are presented in section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Regarding HurtLex features, we have two ways of extracting features: encodings and embeddings. In the first architecture (see Figure 1 ), based on the words in the train set, we find their categories in HurtLex and then derive a vector of HurtLex cate-2 https://github.com/valeriobasile/ hurtlex gories: we call this HurtLex Encoding. The total number of categories in HurtLex is 17, so the dimensionality of the HurtLex encoding is 17. Each element in this vector is simply a frequency count for the respective category in HurtLex. For example, if there is a total of 3 words in a train record (e.g. tweet) that belong in the ethnic slurs category of HurtLex, then the corresponding element in the HurtLex encoding is 3. We call this architecture HurtBERT-Enc.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
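A minimal sketch of the comment-level HurtLex encoding described above: one 17-dimensional frequency-count vector per record. The function name, the whitespace tokenization, and the category ordering are illustrative assumptions; `category_lemmas` is the category-to-lemma mapping from the loading sketch earlier.

```python
from collections import Counter

# Fixed ordering of the 17 HurtLex categories (illustrative; the paper does not
# specify an ordering). `category_lemmas` maps category -> set of lemmas.
CATEGORIES = sorted(category_lemmas)

def hurtlex_encoding(text):
    """Comment-level HurtLex encoding: a 17-dim vector of frequency counts,
    one entry per lexicon category."""
    counts = Counter()
    for tok in text.lower().split():   # naive whitespace tokenization (assumption)
        for cat in CATEGORIES:
            if tok in category_lemmas[cat]:
                counts[cat] += 1
    return [counts[cat] for cat in CATEGORIES]

# A tweet containing 3 words from the "ethnic slurs" category would have a 3
# in the position corresponding to that category.
```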
{
"text": "Our second model explores using HurtLex embeddings with an LSTM, as shown in Figure 2 . The HurtLex embedding is a 17-dimension one-hot encoding of the word presence in each of the lexicon categories. This model is named HurtBERT-Emb.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 85,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
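Correspondingly, a sketch of the word-level HurtLex representation fed to the LSTM branch: one 17-dimensional binary vector per word, padded to the maximum sequence length of 50 used in Section 4. The exact construction is an assumption.

```python
def hurtlex_embedding(text, max_len=50):
    """Word-level HurtLex representation: a (max_len, 17) matrix where row i marks
    the presence of word i in each of the 17 lexicon categories (zero rows for
    padding and for words not in the lexicon)."""
    matrix = [[0] * len(CATEGORIES) for _ in range(max_len)]
    for i, tok in enumerate(text.lower().split()[:max_len]):
        for j, cat in enumerate(CATEGORIES):
            if tok in category_lemmas[cat]:
                matrix[i][j] = 1
    return matrix
```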
{
"text": "One of the main differences between the embedding and the encoding is that the embedding is word-level, while the encoding is commentlevel. Therefore, for the case of HurtLex encodings, one record (e.g. one tweet) generates one 17-dimensional vector (which we call HurtLex encoding). While, for the case of the HurtLex embeddings, every word in the comment has one 17dimensional vector representation (which is the HurtLex embedding). The HurtLex embedding also passes through an embedding layer, which goes into an LSTM and a dense layer, as shown in Figure 2 . In the end, the encoding is a simple representation that reflects if a category from the lexicon is found in the words of comment (or more accurately, how many times this category is found). While the embedding-based model also represents non-linear interactions between the features, that is, linguistically, the role of the HurtLex words in the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 552,
"end": 560,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Finally, for both models, we concatenate the dense layer from the BERT output and the dense layer from the HurtLex output, before passing into a dense layer with sigmoid activation as the predictor layer (see the bottom part of both Figures 1 and 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 233,
"end": 250,
"text": "Figures 1 and 2)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
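The two-branch architecture described above could be assembled in Keras roughly as follows. This is a sketch, not the authors' code: `bert_layer` is assumed to be a TF-Hub KerasLayer whose first output is the 768-dim pooled representation, and the layer sizes (256 and 16 units, ReLU) follow the values reported in Section 4.

```python
import tensorflow as tf

MAX_LEN, N_CATS = 50, 17

def build_hurtbert_enc(bert_layer):
    """Sketch of HurtBERT-Enc: a BERT branch and a HurtLex-encoding branch,
    concatenated before a sigmoid classifier."""
    input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
    input_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_mask")
    segment_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="segment_ids")
    hurtlex_enc = tf.keras.Input(shape=(N_CATS,), dtype=tf.float32, name="hurtlex_enc")

    # Assumed TF-Hub signature: pooled (batch, 768) and sequence outputs.
    pooled, _ = bert_layer([input_ids, input_mask, segment_ids])
    bert_branch = tf.keras.layers.Dense(256, activation="relu")(pooled)
    lex_branch = tf.keras.layers.Dense(16, activation="relu")(hurtlex_enc)

    merged = tf.keras.layers.Concatenate()([bert_branch, lex_branch])
    output = tf.keras.layers.Dense(1, activation="sigmoid")(merged)
    return tf.keras.Model([input_ids, input_mask, segment_ids, hurtlex_enc], output)

# For HurtBERT-Emb, the lexicon branch would instead take (MAX_LEN, 17) inputs
# and pass them through an LSTM(32) before the dense layer and concatenation.
```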
{
"text": "Overall, we follow the experimental setup of (Swamy et al., 2019) . We exploit the BERT pre-trained models available on tensorflow-hub 3 , which facilitate us to integrate BERT on top of Keras architecture 4 . Specifically, we use the bert-uncased model, with 12 transformer blocks, 12 self-attention heads, and hidden layer dimension 768. Based on performance in early experiments, our models use learning rate of e \u2212 5, batch size 32, and maximum sequence length of 50. We implement early stopping and model checkpoint based on the development set evaluation to avoid overfitting during the training process. For the LSTM in Figure 2 , we use 32 nodes, and the dense layers in Figures 1-2 are 256 and 16 nodes respectively (all dense layers except last have RELU activation).",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "(Swamy et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 627,
"end": 635,
"text": "Figure 2",
"ref_id": null
},
{
"start": 679,
"end": 690,
"text": "Figures 1-2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
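A possible training loop for the setup described above, with early stopping and model checkpointing on the development set. This is a sketch: the optimizer, loss, epoch count and patience are not stated in the paper, the reported learning rate of "e-5" is read here as 1e-5, and `train_inputs`/`dev_inputs` are assumed to be prepared elsewhere (token ids, mask, segment ids, and HurtLex encodings).

```python
# Sketch only: optimizer, loss, epochs, and patience are assumptions; the paper's
# "e-5" learning rate is read as 1e-5; batch size 32 follows the reported setup.
model = build_hurtbert_enc(bert_layer)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2,
                                     restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint("hurtbert_enc.h5", monitor="val_loss",
                                       save_best_only=True),
]
model.fit(train_inputs, train_labels,
          validation_data=(dev_inputs, dev_labels),
          epochs=10, batch_size=32, callbacks=callbacks)
```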
{
"text": "The datasets used in our experiments are summarized in Table 2 . All the datasets we explore in this work are in English: we leave the multilin-gual aspect of this research to future work. Similar to previous work in cross-domain classification of abusive language, all datasets need to be cast into binary label as abusive (in bold in the Table) and not abusive. We split all datasets into training, development and test sets with the proportion of 70%, 10% and 20% respectively. We list and describe the datasets below in chronological order, as some of the datasets were built based on previous data or annotation schemes.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 340,
"end": 346,
"text": "Table)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
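As an illustration of the common preprocessing described above, a sketch of casting one dataset's labels to the binary abusive / not-abusive scheme and splitting 70/10/20; the label mapping shown is for the Waseem corpus, and the stratification, random seed, and the `texts`/`raw_labels` variables are assumptions.

```python
from sklearn.model_selection import train_test_split

# Cast labels to binary (example mapping for the Waseem corpus: racism/sexism -> abusive).
ABUSIVE_LABELS = {"racism", "sexism"}
labels = [1 if y in ABUSIVE_LABELS else 0 for y in raw_labels]

# 70% train, then split the remaining 30% into 10% dev and 20% test.
train_x, rest_x, train_y, rest_y = train_test_split(
    texts, labels, test_size=0.30, stratify=labels, random_state=42)
dev_x, test_x, dev_y, test_y = train_test_split(
    rest_x, rest_y, test_size=2 / 3, stratify=rest_y, random_state=42)
```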
{
"text": "Waseem: This corpus was collected over a period of 2 months by using representative keywords which is frequently used to attack specific targets including religious, sexual, gender and ethnic minorities (Waseem and Hovy, 2016) . Two annotators were assigned to annotate the full dataset, with a third expert annotator reviewing their annotations. The final dataset consists of 16,914 tweets, with 3,383 instances targeting gender minorities (sexism), 1,972 labeled as racism, and 11,559 tweets neither sexist nor racist 5 .",
"cite_spans": [
{
"start": 203,
"end": 226,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Davidson: This dataset contains 24,783 tweets 6 manually rated with three labels including hate speech, offensive, and neither . The dataset was manually labelled by using the CrowdFlower platforms 7 , where each tweet was rated by at least three annotators. The final collection only contains 5.8% of total tweets as hate speech and 77.4% as offensive, while the remaining 16.8% were labelled as not offensive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Founta: This dataset collection consists of 80,000 tweets annotated with 4 mutually exclusive labels including abusive, hateful, spam, and normal (Founta et al., 2018) . These tweets were gathered from the original corpus composed of 30 millions tweets was collected from 30 March 2017 to 9 April 2017. The annotation process was completed by five annotators and the final dataset is composed of 11% tweets labeled as abusive, 7.5% as hateful, 59% as normal, and 22.5% as spam.",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "(Founta et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "HatEval: This dataset was used in the SemEval-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter . It contains about 12 thousand records and its labels are hateful or not. This dataset has also been evaluated for migrants and 5 We were able to retrieve only 16,488 instances (3,216 sexism, 1,957 racism and 11,315 neither) 6 We only found this number in https://github.com/t-davidson/ hate-speech-and-offensive-language",
"cite_spans": [
{
"start": 266,
"end": 267,
"text": "5",
"ref_id": null
},
{
"start": 363,
"end": 364,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Label # Instances Target % Waseem (Waseem and Hovy, 2016) Racism, Sexism, None 16,488 31.4 Davidson Hate Speech, Offensive, Neither 24,783 83.2 Founta (Founta et al., 2018) Abusive, Hateful, Spam, Normal 99,799 18.5 HatEval Hateful, Not Hateful 11,971 42.0 OLID (Zampieri et al., 2019b) Offensive, Not Offensive 14,100 32.9 AbuseEval (Caselli et al., 2020) Abusive, Not Abusive 14,100 20.8 (Zampieri et al., 2019a) was used in SemEval-2019 Task 6: 'OffensEval' (Zampieri et al., 2019b) . It has Twitter data as the previous datasets, but it was annotated using a unique hierarchical model based on the proposed idea in (Waseem et al., 2017) . We use the Offensive and Not Offensive labeled data, where about 30% of the records are labeled as Offensive.",
"cite_spans": [
{
"start": 34,
"end": 57,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF31"
},
{
"start": 151,
"end": 172,
"text": "(Founta et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 262,
"end": 286,
"text": "(Zampieri et al., 2019b)",
"ref_id": "BIBREF36"
},
{
"start": 334,
"end": 356,
"text": "(Caselli et al., 2020)",
"ref_id": null
},
{
"start": 390,
"end": 414,
"text": "(Zampieri et al., 2019a)",
"ref_id": "BIBREF34"
},
{
"start": 461,
"end": 485,
"text": "(Zampieri et al., 2019b)",
"ref_id": "BIBREF36"
},
{
"start": 619,
"end": 640,
"text": "(Waseem et al., 2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "AbuseEval: Caselli et al. (2020) created a new corpus by re-annotating OLID in order to model abusive language, seen as a correlated but independent phenomenon from offensive language. The annotation of abusiveness is carried out by three annotators at a coarse-grained, binary level (i.e., abusive vs. not abusive), and at a finer grain with the further distinction between implicit and explicit abusive language. Even though, as expected, there is overlap between offensive and abusive comments, a surprising number of instances labeled 'Offensive' in OLID were marked as 'Not Abusive' in AbuseEval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "In our experiments, we train on the training set of each of the six datasets in Table 2 and test on all of the test sets as well as the Immigrant and Misogyny test sets of HatEval, denoted as 'HatEval Mig' and 'HatEval Mis' respectively, for a total of eight test sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Additionally, we run the experiments with each model a total of five times and present the average result. In summary, the results presented in this section are based on 720 experiments (3 models \u00d7 6 train sets \u00d7 8 test sets \u00d7 5 runs). About the variance of the results, the average standard deviation we observe is under 0.02 with very few exceptions: for example, AbuseEval results' standard deviation has an average of 0.03.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
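A sketch of the evaluation protocol just described: macro-averaged F1 computed per run and then averaged over the five runs, with the standard deviation reported alongside; `run_predictions` is an assumed list of five prediction arrays for one train/test pair.

```python
from statistics import mean, stdev
from sklearn.metrics import f1_score

# Five runs of the same model on one train/test pair; average the macro-F1 scores.
scores = [f1_score(test_y, preds, average="macro") for preds in run_predictions]
print(f"F1-macro: {mean(scores):.3f} (std {stdev(scores):.3f})")
```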
{
"text": "Results for all our experiments are shown in Table 3 . In this Table, we show the F1 macroaveraged results for our two models, HurtBERT-Enc (Encodings, see Figure 1 ) and HurtBERT-Emb (Embeddings, see Figure 2 ) versus the BERT baseline (refer to Section 3 for a description of all models).",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 63,
"end": 69,
"text": "Table,",
"ref_id": null
},
{
"start": 156,
"end": 164,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 201,
"end": 209,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Starting with in-dataset experiments (shaded gray in Table 3 ), the results indicate that HurtBERT performs better than the baseline on 4 out of 6 datasets, namely AbuseEval, HatEval, OLID, and Waseem. In all four cases, HurtBERT-Emb is doing the best. The improvement in F1-macro is small in some cases (e.g., for Waseem, HurtBERT-Emb has 0.838 versus 0.836 for the baseline) and larger in others (e.g., for HatEval, HurtBERT-Emb has 0.562 versus 0.533 for the baseline).",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "As expected, based on previous studies, the vast majority of our out-domain results are lower than the in-domain ones. For example, for Davidson, the in-domain performance (training and testing on Davidson) is in the 90's, while the out-domain (training on other datasets and testing on Davidson) ranges from 40's to 70's. There are some exceptions, for example, training our models on Founta and testing on OLID has better performance than when training our models on OLID (e.g. see BERT Baseline results, 0.753 for Founta-trained versus 0.739 for OLID trained). This is on par with previous work (Swamy et al., 2019) : as they noted, there is similarity between these two datasets and Founta is a larger dataset (see Table 2 ).",
"cite_spans": [
{
"start": 287,
"end": 296,
"text": "Davidson)",
"ref_id": null
},
{
"start": 598,
"end": 618,
"text": "(Swamy et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 719,
"end": 726,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "When comparing our models with the baseline in the cross-dataset experiments, we observe that our two variants of HurtBERT obtain better results when fine-tuned on other datasets, in particular Davidson, OLID, and Waseem to a varying extent, while the results for the experiments with fine-tuning on HatEval are mixed. We observe some large improvements, for example, training on Waseem and testing on Davidson, the F1-macro for HurtBERT is 0.445 (based on Encodings) and 0.462 (based on Embeddings) versus 0.406 for the BERT baseline. On the other hand, most results trained on Davidson are relatively close to the baseline. A possible explanation for this empirical evidence may have its roots in the different nature of the phenomena modeled by the datasets employed in our experiments. In fact, HurtLex seems to provide more informative knowledge to the model when the goal task is to detect offensive language (e.g., OLID) rather than abusive language (e.g., AbuseEval). This would make sense given that the lexical resource comes from a lexicon of words used to explicitly express the intention to hurt, while AbusEval is much more about \"implicit\" abuse. We manually inspected some of the predictions of the models, with particular attention towards the instances that were misclassified by the baseline (BERT) and correctly classified by either HurtBERT-Enc or HurtBERT-Emb model. On HS data, we found many cases where swear words were present that are often used with nonoffensive function, according to the classification in Pamungkas et al. (2020a). The word \"b***h\" in particular is ubiquitous in this subset, see for instance the following tweets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Me: these shoes look scary Me to me: you're a prison psychologist, suck it up, b***h When my sister and her boyfriend was arguing my nephew went upstairs & said \"my mama not a b***h or a h*e so you better watch yo mouth\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Love that u used WOMEN instead of b***h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Our hypothesis is that the additional knowledge from HurtLex has a stabilizing effect on the representation of offensive terms, whereas the fully contextual embeddings of BERT tend to always understand such terms as offensive due to the sentencelevel context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "Comparing our two models (see the model diagrams in Figures 1 and 2) , we observe more improvements from HurtBERT-Emb (see again the results in Table 3 ). Over all the experiments, HurtBERT-Emb has the best (maximum) performance in 26 out of 48 experiments, versus 14 for HurtBERT-Enc out of 48 (there are a couple of ties in these numbers). When we look at the different training sets, the largest improvement overall is training on Waseem, where HurtBERT-Emb has the best performance among the three models in all 8 out of 8 experiments, versus only 3 out of 8 for HurtBERT-Enc. In other datasets, an example is training on OLID and testing on AbuseEval, the HurtBERT-Emb F1-macro is 0.680 versus 0.663 for the baseline and 0.666 for HurtBERT-Enc. Another example is training on Founta and testing on Hateval Mig: HurtBERT-Emb has 0.578 versus 0.542 for the baseline and 0.544 for HurtBERT-Enc. This seems to be expected, that a method based on word embeddings performs better than one based on a simple, numerical encoding which represents an entire comment. HurtLex embeddings go through an LSTM and dense layer (Fig. 2) , therefore, we expect this model to learn relationships among words in the context of the comments in the data. Nevertheless, there are cases where HurtBERT-Enc, does better; for example, training on AbuseEval and testing on Founta, HurtBERT-Enc has F1-macro of 0.715 vs 0.707 for the baseline and 0.702 for HurtBERT-Emb. This shows that, in some cases, a simple architecture with numerical encodings at the comment level can outperform the more sophisticated model based on embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 68,
"text": "Figures 1 and 2)",
"ref_id": "FIGREF0"
},
{
"start": 144,
"end": 151,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1116,
"end": 1124,
"text": "(Fig. 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "In this work, we explore how to combine a BERT model with features extracted from a hate speech lexicon. The lexical features are extracted based on multiple categories in the lexicon and according to how these categories are found in the data. The lexical features can be represented at the comment level as simple numerical encodings or at the word level as embeddings that aim to learn the relationships of the lexical features in the context of the data. We conduct extensive experimentation, with in-domain as well as cross-domain training. We observe that our methods improve on the BERT baseline in the large majority of the cases, with high gains in some cases. It proves our hypothesis that the additional features from lexical knowledge can improve the BERT performance, providing a domain-agnostic feature in a cross-domain setting. For our future work, we will explore different languages to take advantage of the multilingual aspect of our lexicon. We also plan to delve deeper into the study of the relationships between our models and the linguistic aspects and phenomena in the various abusive and offensive datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "https://www.tensorflow.org/hub 4 https://keras.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Now Figure Eight https://www.figure-eight. com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work of Valerio Basile, Endang W. Pamungkas and Viviana Patti is partially funded by Progetto di Ateneo/CSP 2016 (Immigrants, Hate and Prejudice in Social Media, S1618.L2.BOSC.01) and by the project \"Be Positive!\" (under the 2019 \"Google.org Impact Challenge on Safety\" call).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Are they our brothers? analysis and detection of religious hate speech in the Arabic Twittersphere",
"authors": [
{
"first": "Nuha",
"middle": [],
"last": "Albadi",
"suffix": ""
},
{
"first": "Maram",
"middle": [],
"last": "Kurdi",
"suffix": ""
},
{
"first": "Shivakant",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2018",
"volume": "",
"issue": "",
"pages": "69--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nuha Albadi, Maram Kurdi, and Shivakant Mishra. 2018. Are they our brothers? analysis and detection of religious hate speech in the Arabic Twittersphere. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analy- sis and Mining, ASONAM 2018, pages 69-76. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hurtlex: A multilingual lexicon of words to hurt",
"authors": [
{
"first": "Elisa",
"middle": [],
"last": "Bassignana",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Italian Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisa Bassignana, Valerio Basile, and Viviana Patti. 2018. Hurtlex: A multilingual lexicon of words to hurt. In Proceedings of the Fifth Italian Confer- ence on Computational Linguistics (CLiC-it 2018), Torino, Italy, December 10-12, 2018.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural word decomposition models for abusive language detection",
"authors": [
{
"first": "Sravan",
"middle": [],
"last": "Bodapati",
"suffix": ""
},
{
"first": "Spandana",
"middle": [],
"last": "Gella",
"suffix": ""
},
{
"first": "Kasturi",
"middle": [],
"last": "Bhattacharjee",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "135--145",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3515"
]
},
"num": null,
"urls": [],
"raw_text": "Sravan Bodapati, Spandana Gella, Kasturi Bhattachar- jee, and Yaser Al-Onaizan. 2019. Neural word de- composition models for abusive language detection. In Proceedings of the Third Workshop on Abusive Language Online, pages 135-145, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Computational linguistics against hate: Hate speech detection and visualization on social media in the \"Contro L'Odio\" project",
"authors": [
{
"first": "T",
"middle": [
"E"
],
"last": "Arthur",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Capozzi",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Giancarlo",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Cataldo",
"middle": [],
"last": "Ruffo",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Musto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polignano",
"suffix": ""
}
],
"year": 2019,
"venue": "6th Italian Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur TE Capozzi, Mirko Lai, Valerio Basile, Fabio Poletto, Manuela Sanguinetti, Cristina Bosco, Vi- viana Patti, Giancarlo Ruffo, Cataldo Musto, Marco Polignano, et al. 2019. Computational linguistics against hate: Hate speech detection and visualiza- tion on social media in the \"Contro L'Odio\" project. In 6th Italian Conference on Computational Linguis- tics, CLiC-it 2019, Bari, Italy.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "2020. I feel offended, don't be abusive! implicit/explicit messages in offensive and abusive language",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Jelena",
"middle": [],
"last": "Mitrovi\u0107",
"suffix": ""
},
{
"first": "Inga",
"middle": [],
"last": "Kartoziya",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Granitzer",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6193--6202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli, Valerio Basile, Jelena Mitrovi\u0107, Inga Kartoziya, and Michael Granitzer. 2020. I feel of- fended, don't be abusive! implicit/explicit messages in offensive and abusive language. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 6193-6202, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pay \"attention\" to your context when classifying abusive language",
"authors": [
{
"first": "Tuhin",
"middle": [],
"last": "Chakrabarty",
"suffix": ""
},
{
"first": "Kilol",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "70--79",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3508"
]
},
"num": null,
"urls": [],
"raw_text": "Tuhin Chakrabarty, Kilol Gupta, and Smaranda Mure- san. 2019. Pay \"attention\" to your context when classifying abusive language. In Proceedings of the Third Workshop on Abusive Language Online, pages 70-79, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "CONAN -COunter NArratives through Nichesourcing: a Multilingual Dataset of Responses to Fight Online Hate Speech",
"authors": [
{
"first": "Yi-Ling",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Elizaveta",
"middle": [],
"last": "Kuzmenko",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Serra Sinem Tekiroglu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guerini",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2819--2829",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1271"
]
},
"num": null,
"urls": [],
"raw_text": "Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN - COunter NArratives through Nichesourcing: a Mul- tilingual Dataset of Responses to Fight Online Hate Speech. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 2819-2829, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eleventh International Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "512--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International Con- ference on Web and Social Media, ICWSM 2017, Montr\u00e9al, Qu\u00e9bec, Canada, May 15-18, 2017, pages 512-515. AAAI Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Le parole per ferire. Internazionale",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Tullio De",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tullio De Mauro. 2016. Le parole per ferire. Inter- nazionale. 27 settembre 2016.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Code of conduct on countering illegal hate speech online",
"authors": [],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "EU Commission. 2016. Code of conduct on countering illegal hate speech online.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "In Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "CEUR Workshop Proceedings",
"volume": "2263",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Debora Nozza, and Paolo Rosso. 2018a. Overview of the EVALITA 2018 Task on Automatic Misogyny Identification (AMI). In Pro- ceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018), volume 2263 of CEUR Workshop Proceedings. CEUR-WS.org.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Overview of the Task on Automatic Misogyny Identification at IberEval",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Anzovino",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages",
"volume": "2150",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Paolo Rosso, and Maria Anzovino. 2018b. Overview of the Task on Automatic Misog- yny Identification at IberEval 2018. In Proceed- ings of the Third Workshop on Evaluation of Hu- man Language Technologies for Iberian Languages (IberEval 2018), volume 2150 of CEUR Workshop Proceedings, pages 1-15. CEUR-WS.org.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A survey on automatic detection of hate speech in text",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "S\u00e9rgio",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Computing Surveys",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3232676"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Fortuna and S\u00e9rgio Nunes. 2018. A survey on au- tomatic detection of hate speech in text. ACM Com- puting Surveys, 51(4).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Large scale crowdsourcing and characterization of twitter abusive behavior",
"authors": [
{
"first": "Antigoni-Maria",
"middle": [],
"last": "Founta",
"suffix": ""
},
{
"first": "Constantinos",
"middle": [],
"last": "Djouvas",
"suffix": ""
},
{
"first": "Despoina",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2018,
"venue": "International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antigoni-Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Cross-domain detection of abusive language online",
"authors": [
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan\u0161najder",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "132--137",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5117"
]
},
"num": null,
"urls": [],
"raw_text": "Mladen Karan and Jan\u0160najder. 2018. Cross-domain detection of abusive language online. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 132-137, Brussels, Belgium. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "FlorUniTo@TRAC-2: Retrofitting word embeddings on an abusive lexicon for aggressive language detection",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Koufakou",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "106--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Koufakou, Valerio Basile, and Viviana Patti. 2020. FlorUniTo@TRAC-2: Retrofitting word em- beddings on an abusive lexicon for aggressive lan- guage detection. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbully- ing, pages 106-112, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural character-based composition models for abuse detection",
"authors": [
{
"first": "Pushkar",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5101"
]
},
"num": null,
"urls": [],
"raw_text": "Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2018. Neural character-based composi- tion models for abuse detection. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 1-10, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Tackling online abuse: A survey of automated abuse detection methods",
"authors": [
{
"first": "Pushkar",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Yannakoudakis",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.06024"
]
},
"num": null,
"urls": [],
"raw_text": "Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2019. Tackling online abuse: A survey of automated abuse detection methods. arXiv preprint arXiv:1908.06024.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Abusive Language Detection on Arabic Social Media",
"authors": [
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Walid",
"middle": [],
"last": "Magdy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "52--56",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3008"
]
},
"num": null,
"urls": [],
"raw_text": "Hamdy Mubarak, Kareem Darwish, and Walid Magdy. 2017. Abusive Language Detection on Arabic So- cial Media. In Proceedings of the First Workshop on Abusive Language Online, pages 52-56, Van- couver, BC, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Do you really want to hurt me? predicting abusive swearing in social media",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Endang Wahyu Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "2020",
"issue": "",
"pages": "6237--6246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas, Valerio Basile, and Vi- viana Patti. 2020a. Do you really want to hurt me? predicting abusive swearing in social media. In Pro- ceedings of The 12th Language Resources and Eval- uation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pages 6237-6246. European Lan- guage Resources Association.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Misogyny detection in twitter: a multilingual and cross-domain study. Information Processing & Management",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Endang Wahyu Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.ipm.2020.102360"
]
},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas, Valerio Basile, and Vi- viana Patti. 2020b. Misogyny detection in twitter: a multilingual and cross-domain study. Information Processing & Management, page 102360.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon",
"authors": [
{
"first": "Endang Wahyu",
"middle": [],
"last": "Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {
"DOI": [
"10.18653/v1/P19-2051"
]
},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas and Viviana Patti. 2019. Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 363-370, Florence, Italy.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Marios D. Dikaiakos, and Evangelos Markatos. 2020. Mandola: A big-data processing and visualization platform for monitoring and detecting online hate speech",
"authors": [
{
"first": "Demetris",
"middle": [],
"last": "Paschalides",
"suffix": ""
},
{
"first": "Dimosthenis",
"middle": [],
"last": "Stephanidis",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Andreou",
"suffix": ""
},
{
"first": "Kalia",
"middle": [],
"last": "Orphanou",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Pallis",
"suffix": ""
},
{
"first": "Marios",
"middle": [
"D."
],
"last": "Dikaiakos",
"suffix": ""
},
{
"first": "Evangelos",
"middle": [],
"last": "Markatos",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Trans. Internet Technol",
"volume": "20",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3371276"
]
},
"num": null,
"urls": [],
"raw_text": "Demetris Paschalides, Dimosthenis Stephanidis, An- dreas Andreou, Kalia Orphanou, George Pallis, Mar- ios D. Dikaiakos, and Evangelos Markatos. 2020. Mandola: A big-data processing and visualization platform for monitoring and detecting online hate speech. ACM Trans. Internet Technol., 20(2).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Resources and benchmark corpora for hate speech detection: a systematic review",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s10579-020-09502-8"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Poletto, Valerio Basile, Manuela Sanguinetti, Cristina Bosco, and Viviana Patti. 2020. Resources and benchmark corpora for hate speech detection: a systematic review. Language Resources and Evalu- ation.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "An Italian Twitter Corpus of Hate Speech against Immigrants",
"authors": [
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Stranisci",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18)",
"volume": "",
"issue": "",
"pages": "2798--2895",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuela Sanguinetti, Fabio Poletto, Cristina Bosco, Vi- viana Patti, and Marco Stranisci. 2018. An Ital- ian Twitter Corpus of Hate Speech against Immi- grants. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC'18), pages 2798-2895. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The discourse of online content moderation: Investigating polarized user responses to changes in Reddit's quarantine policy",
"authors": [
{
"first": "Qinlan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "58--69",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3507"
]
},
"num": null,
"urls": [],
"raw_text": "Qinlan Shen and Carolyn Rose. 2019. The discourse of online content moderation: Investigating polarized user responses to changes in Reddit's quarantine pol- icy. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 58-69, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Studying generalisability across abusive language detection datasets",
"authors": [
{
"first": "Steve",
"middle": [
"Durairaj"
],
"last": "Swamy",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Jamatia",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1088"
]
},
"num": null,
"urls": [],
"raw_text": "Steve Durairaj Swamy, Anupam Jamatia, and Bj\u00f6rn Gamb\u00e4ck. 2019. Studying generalisability across abusive language detection datasets. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), Hong Kong, China.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Challenges and frontiers in abusive content detection",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Rebekah",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Margetts",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Third Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "80--93",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3509"
]
},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 80-93, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Understanding Abuse: A Typology of Abusive Language Detection Subtasks",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3012"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding Abuse: A Typology of Abusive Language Detection Sub- tasks. In Proceedings of the First Workshop on Abu- sive Language Online, pages 78-84, Vancouver, BC, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the American Chapter of the Association for Computational Linguistics NAACL Student Research Workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {
"DOI": [
"10.18653/v1/N16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate speech detection on twitter. In Proceedings of the American Chapter of the Association for Computa- tional Linguistics NAACL Student Research Work- shop, pages 88-93, San Diego, California.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Detection of abusive language: the problem of biased datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "602--608",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1060"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: the problem of biased datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 602-608.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Inducing a lexicon of abusive words -a feature-based approach",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Greenberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1046--1056",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1095"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words -a feature-based approach. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046-1056, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Predicting the type and target of offensive posts in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the 2019",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1415--1420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1415-1420.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. Semeval-2019 task 6: Identifying and cate- gorizing offensive language in social media (offense- val). In Proceedings of the 13th International Work- shop on Semantic Evaluation, pages 75-86, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Pepa",
"middle": [],
"last": "Atanasova",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Karadzhov",
"suffix": ""
},
{
"first": "Hamdy",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Zeses",
"middle": [],
"last": "Pitenis",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Semeval-2020 task 12: Multilingual offensive language identification in social media (offenseval 2020).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "HurtBERT-Enc, our model using HurtLex EncodingsFigure 2: HurtBERT-Emb, our model using HurtLex Embeddings",
"uris": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"text": "Descriptions, number of terms and examples for the categories in HurtLex",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table><tr><td>misogyny.</td></tr><tr><td>OLID: The Offensive Language Identification</td></tr><tr><td>Dataset</td></tr></table>",
"type_str": "table",
"text": "The datasets used in this paper (chronological order): labels, number of instances, and percent of records that are labeled abusive, offensive, or hateful.",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "The F1-macro results for all datasets. Shaded means in-dataset experiment. B stands for the baseline, HB-Enc stands for HurtBERT-Enc, and HB-Emb stands for HurtBERT-Emb. Bold indicates our model improves on the baseline; underlined indicates the best result (max). Each result is the average of five runs.",
"num": null,
"html": null
}
}
}
}