{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:42.832558Z" }, "title": "Attending the Emotions to Detect Online Abusive Language", "authors": [ { "first": "Niloofar", "middle": [ "Safi" ], "last": "Samghabadi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Houston", "location": {} }, "email": "nsafisamghabadi@uh.edu" }, { "first": "Afsheen", "middle": [], "last": "Hatami", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Houston", "location": {} }, "email": "amhatami@uh.edu" }, { "first": "Mahsa", "middle": [], "last": "Shafaei", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Houston", "location": {} }, "email": "mshafaei@uh.edu" }, { "first": "Sudipta", "middle": [], "last": "Kar", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Houston", "location": {} }, "email": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Houston", "location": {} }, "email": "tsolorio@uh.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In recent years, abusive behavior has become a serious issue in online social networks. In this paper, we present a new corpus for the task of abusive language detection that is collected from a semi-anonymous online platform, and unlike the majority of other available resources, is not created based on a specific list of bad words. We also develop computational models to incorporate emotions into textual cues to improve aggression identification. We evaluate our proposed methods on a set of corpora related to the task and show promising results with respect to abusive language detection.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In recent years, abusive behavior has become a serious issue in online social networks. In this paper, we present a new corpus for the task of abusive language detection that is collected from a semi-anonymous online platform, and unlike the majority of other available resources, is not created based on a specific list of bad words. We also develop computational models to incorporate emotions into textual cues to improve aggression identification. We evaluate our proposed methods on a set of corpora related to the task and show promising results with respect to abusive language detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Nowadays, abusive behavior has become a rising problem in online communities (Jones et al., 2013; Ybarra and Mitchell, 2004) . Such adverse behavior can have serious effects on the physical, mental, and social health of online users, among whom teenagers and young adults are the most vulnerable group. 1 To combat this problem at scale, automated Natural Language Processing (NLP) systems can help identify potentially abusive language.", "cite_spans": [ { "start": 77, "end": 97, "text": "(Jones et al., 2013;", "ref_id": "BIBREF13" }, { "start": 98, "end": 124, "text": "Ybarra and Mitchell, 2004)", "ref_id": "BIBREF32" }, { "start": 303, "end": 304, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, there have been several efforts to automate the detection of offensive language across social media platforms. 
Lexical features have been proven to work quite well for this task (Dinakar et al., 2012; Davidson et al., 2017) . However, such features introduce some bias into the systems by relying heavily on profane words, whereas reports show that most profanities are used in a neutral way in today's teen talk (Samghabadi et al., 2017; Vidgen et al., 2019) . The following examples signify the need for linguistically more sophisticated techniques, beyond profanity-dependent models, to detect abusive language:", "cite_spans": [ { "start": 195, "end": 217, "text": "(Dinakar et al., 2012;", "ref_id": "BIBREF7" }, { "start": 218, "end": 240, "text": "Davidson et al., 2017)", "ref_id": "BIBREF5" }, { "start": 431, "end": 456, "text": "(Samghabadi et al., 2017;", "ref_id": "BIBREF24" }, { "start": 457, "end": 477, "text": "Vidgen et al., 2019)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 http://enough.org/stats_cyberbullying Neutral: Damn you are such a BEAUTIFUL F*CKING MOMMY! Offensive: u should use ur hands to choke urself.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In fact, most of the resources available for abusive language detection have been created based on either a list of bad words or seed words related to abusive topics. In this paper, we aim at tackling this limitation by proposing a new method for sampling the data without focusing on a specific bad word list. We are interested in collecting this new dataset from a social media website that is especially popular among youth, since they are the most vulnerable group of users when it comes to online abuse. We scrape our data from Curious Cat, 2 a semi-anonymous question-answering website that has grown in popularity among teenagers. This platform provides a way to interact anonymously, which opens the door for digital abuse. On this website, users can choose not to reveal any personal information on their account, as well as post comments/questions on other users' timelines anonymously. Additionally, posts are, on average, very short. These properties limit both the content of a post and the information about the sender of that post.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To overcome the aforementioned challenges within the data, we propose a new methodology that integrates emotional information with textual cues from the input text to decide whether it is offensive or not. Our main contributions in this paper are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce a new corpus for the task of abusive language detection, which is not created based on a specific list of profane words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We develop approaches for incorporating emotions into textual information to improve abusive language detection, and create unified deep neural models that show promising results across several relevant corpora from various domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce Gated Emotion-Aware Attention (GEA), which dynamically learns the contribution of emotion and textual information to weigh the words inside a sequence. 
We show that this new attention mechanism significantly outperforms regular attention, which uses only the textual hidden representations to learn the word weights, when the input text is short and noisy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Abusive language identification and hate speech detection have been addressed by many research papers (Mishra et al., 2019c; Schmidt and Wiegand, 2017) . Most related work employs feature engineering, using a combination of lexical, syntactic, semantic, sentiment, and lexicon-based features along with classic machine learning algorithms such as Support Vector Machines (SVM) and Logistic Regression (Samghabadi et al., 2018; Davidson et al., 2017; Nobata et al., 2016; Gitari et al., 2015; Van Hee et al., 2015) . Due to the popularity of deep neural networks, multiple studies have recently explored the performance of these models on the task of aggression identification. Most of these studies focus on hate speech detection on Twitter. Gamb\u00e4ck and Sikdar (2017) use a Convolutional Neural Network (CNN) based model and investigate different textual and embedding features as input, with word2vec producing the best results. Badjatiya et al. (2017) conduct an extensive evaluation of multiple traditional and deep learning approaches, and report the best results using an ensemble of LSTM and Gradient Boosted Decision Trees. A few works also try to incorporate user information into the model, using approaches such as Graph Neural Networks (Mishra et al., 2019a,b; Ribeiro et al., 2018) to learn the structure of online communities along with the linguistic behaviors of the users within them. The main limitation of these approaches is that they are not applicable to social media platforms that offer anonymity options to users, such as Curious Cat and ask.fm.", "cite_spans": [ { "start": 102, "end": 124, "text": "(Mishra et al., 2019c;", "ref_id": "BIBREF19" }, { "start": 125, "end": 151, "text": "Schmidt and Wiegand, 2017)", "ref_id": "BIBREF27" }, { "start": 447, "end": 472, "text": "(Samghabadi et al., 2018;", "ref_id": "BIBREF25" }, { "start": 473, "end": 495, "text": "Davidson et al., 2017;", "ref_id": "BIBREF5" }, { "start": 496, "end": 516, "text": "Nobata et al., 2016;", "ref_id": "BIBREF20" }, { "start": 517, "end": 537, "text": "Gitari et al., 2015;", "ref_id": "BIBREF12" }, { "start": 538, "end": 559, "text": "Van Hee et al., 2015)", "ref_id": null }, { "start": 1029, "end": 1052, "text": "Badjatiya et al. (2017)", "ref_id": "BIBREF1" }, { "start": 1361, "end": 1385, "text": "(Mishra et al., 2019a,b;", "ref_id": null }, { "start": 1386, "end": 1407, "text": "Ribeiro et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Several research papers have shown that emotion lexicons are helpful features for the tasks of abusive language and hate-speech detection (Koufakou and Scott, 2020; Wiegand et al., 2018; Martins et al., 2018; Corazza et al., 2018; Alorainy et al., 2018; Gao and Huang, 2017) . 
There is also a study showing that jointly modeling emotion classification and abuse detection through a multitask approach can improve the performance of the latter task (Rajamanickam et al., 2020) .", "cite_spans": [ { "start": 139, "end": 165, "text": "(Koufakou and Scott, 2020;", "ref_id": "BIBREF15" }, { "start": 166, "end": 187, "text": "Wiegand et al., 2018;", "ref_id": "BIBREF30" }, { "start": 188, "end": 209, "text": "Martins et al., 2018;", "ref_id": "BIBREF16" }, { "start": 210, "end": 231, "text": "Corazza et al., 2018;", "ref_id": "BIBREF4" }, { "start": 232, "end": 254, "text": "Alorainy et al., 2018;", "ref_id": "BIBREF0" }, { "start": 255, "end": 275, "text": "Gao and Huang, 2017)", "ref_id": "BIBREF11" }, { "start": 454, "end": 481, "text": "(Rajamanickam et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our methodology differs from other existing methods in two key ways: (1) instead of using an ensemble approach, we create unified deep neural architectures that show very promising results across multiple domains, and (2) we do not use any user-level information in our model. Therefore, the model can be applied to various online platforms, even those that offer anonymity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We collected the data from Curious Cat, which is a semi-anonymous, question-answer social media platform. Curious Cat is very popular among the youth and has more than 15 million registered users. On this website, users can choose not to reveal any personal information on their account, as well as post comments/questions on other users' timelines anonymously. The anonymity option available on Curious Cat opens the door for digital abuse. Due to these properties, there are two significant limitations with respect to Curious Cat data: (1) the post content is usually very short, making abuse detection harder, and (2) there is very limited information, if any, about the sender of a post.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3" }, { "text": "We crawled around 500K English question-answer pairs from 2K randomly chosen users of Curious Cat. To avoid biasing the data toward some specific swear words, we did not use a particular list of bad words to find potentially offensive messages. Instead, we exploited the state-of-the-art classification method for abusive language detection on ask.fm (Samghabadi et al., 2017) 3 for two reasons: (1) the format of the data in Curious Cat and ask.fm is very similar, 4 and (2) this method utilizes lexical features that make it capable of learning new words and phrases related to the offensive class. This model combines lexical, domain-specific, and emotion-related features and uses an SVM classifier to detect nastiness. We train that classifier on the full ask.fm dataset and apply it to Curious Cat to automatically label all rows of data. While ask.fm and Curious Cat have the same format, we noticed key differences between them, which may substantially affect the quality of automatic labeling. For instance, with Curious Cat, we observe numerous sexual posts that are full of profanities, yet not offensive to the user; e.g., a user may encourage others to post sexual comments to him/her, as in the following example: Question: I wanna s*ck your d*ck so hard and taste your c*m. Answer: Enter my DMs beautiful.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection and Annotation", "sec_num": "3.1" },
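To make the weak-labeling and sampling step concrete, the following is a minimal sketch, not the authors' released implementation: it assumes scikit-learn and approximates the full feature set of Samghabadi et al. (2017) (lexical, domain-specific, and emotion-related features) with character n-grams; all function and variable names are illustrative.

```python
# Minimal sketch of the weak-labeling step used to pre-label Curious Cat posts.
# Assumption: character n-grams stand in for the richer feature set of the
# cited SVM system; labels are 1 = offensive, 0 = neutral.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def sample_for_annotation(askfm_texts, askfm_labels, cc_questions, n=2482):
    """Train on ask.fm, pre-label Curious Cat questions, and draw a
    60/40 offensive/neutral sample for manual annotation."""
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), min_df=2),
        LinearSVC(class_weight="balanced"),
    )
    clf.fit(askfm_texts, askfm_labels)
    pred = clf.predict(cc_questions)
    offensive = [q for q, y in zip(cc_questions, pred) if y == 1]
    neutral = [q for q, y in zip(cc_questions, pred) if y == 0]
    return (random.sample(offensive, int(0.6 * n)) +
            random.sample(neutral, n - int(0.6 * n)))
```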
{ "text": "Therefore, we randomly selected 2,482 question-answer pairs, where 60% were chosen from the negative/offensive labeled data and 40% from the positive/neutral labeled data (we only considered the label of the questions). Four in-lab annotators (one graduate and three undergraduate students) annotated the data. Each row was tagged by three different annotators, and the final label was assigned to each instance by majority voting. Based on the annotations, the Fleiss's kappa (Fleiss, 1971 ) score is 0.5, which shows moderate agreement among the annotators. Figure 1 shows the rate of \"complete agreement\" among all annotators for positive and negative questions and answers. By complete agreement, we mean the case where all the annotators assigned the same class to an instance (in Curious Cat data, an instance could be a question or an answer). Based on the figure, the complete agreement on the negative/offensive class is much lower than on the positive/neutral one. This observation demonstrates that the perceived level of aggression is very subjective, so our final agreement score is reasonable (Sap et al., 2019) . It is also interesting that for negative instances, the annotation results show more complete agreement on questions than on answers. This indicates that it was more difficult for the annotators to decide whether a reply to a comment is offensive. Table 1 shows the final distribution of the proposed Curious Cat corpus. Statistics show that 95% of negative comments were posted on users' timelines anonymously. Looking at the labeled data, we also found that about 100 instances of abusive posts do not include any profanities, and 1,327 positive/neutral posts have at least one profane word. This shows that the proposed sampling method could capture implicit forms of abusive language as well as explicit ones. The technique also samples posts that include bad words but do not attack other users.", "cite_spans": [ { "start": 435, "end": 448, "text": "(Fleiss, 1971", "ref_id": "BIBREF9" }, { "start": 1070, "end": 1087, "text": "Sap et al. (2019)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 518, "end": 526, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1354, "end": 1361, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data Collection and Annotation", "sec_num": "3.1" }, { "text": "We also experimented with the following available corpora to better assess the performance of the proposed models: (1) the ask.fm dataset (Samghabadi et al., 2017), (2) the Kaggle insult dataset, 6 and (3) the Wikipedia personal attacks dataset (Wulczyn et al., 2017) . Table 2 compares all resources that we use in this paper. Our Curious Cat data can be accessed through our website. 7", "cite_spans": [ { "start": 234, "end": 256, "text": "(Wulczyn et al., 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [ { "start": 259, "end": 266, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Other Abusive Language Datasets", "sec_num": "3.3" }, { "text": "For capturing emotions from the text, we use DeepMoji (Felbo et al., 2017) , pre-trained on Twitter data. As output, this model creates a representation over 64 frequently used online emojis that shows how relevant each emoji is to a given text. Figure 2 illustrates the top 5 emojis that DeepMoji assigned to one neutral and one offensive instance in our Curious Cat data. Both of these comments are very short and include the bad word \"die\". We can see that DeepMoji correctly recognized the tone of the language in both examples. The colors show the attention weights assigned by the DeepMoji model; darker colors indicate higher attention weights. Interestingly, the word \"die\" is attended the most in the offensive instance.", "cite_spans": [ { "start": 467, "end": 487, "text": "(Felbo et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 664, "end": 672, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Methodology", "sec_num": "4" },
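As a hedged illustration of this feature-extraction step, the sketch below assumes the publicly available torchMoji port of DeepMoji (https://github.com/huggingface/torchMoji); the paper does not name its implementation, and the 30-token limit is an assumption.

```python
# Hedged sketch: extracting the 64-emoji relevance vector with torchMoji,
# the public PyTorch port of DeepMoji (an assumption about tooling).
import json
import numpy as np
from torchmoji.sentence_tokenizer import SentenceTokenizer
from torchmoji.model_def import torchmoji_emojis
from torchmoji.global_variables import PRETRAINED_PATH, VOCAB_PATH

with open(VOCAB_PATH, "r") as f:
    vocabulary = json.load(f)

tokenizer = SentenceTokenizer(vocabulary, 30)   # 30-token limit is an assumption
model = torchmoji_emojis(PRETRAINED_PATH)       # 64-way emoji output head

texts = ["Damn you are such a BEAUTIFUL F*CKING MOMMY!",
         "u should use ur hands to choke urself."]
tokens, _, _ = tokenizer.tokenize_sentences(texts)
emoji_probs = model(tokens)                     # (2, 64) relevance scores

top5 = np.argsort(-np.asarray(emoji_probs), axis=1)[:, :5]  # top-5 emoji ids
```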
{ "text": "In this paper, we examine two different approaches to creating a model that combines DeepMoji and textual representations to detect whether a given input text is offensive or not. The motivation behind this idea is to exploit the emotional representation to better distinguish offensive uses of profanities from neutral ones. Both models include the following two main modules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4" }, { "text": "1. Bidirectional Long Short-Term Memory (BiLSTM): This module has an embedding layer that generates the corresponding embedding matrix for the given input text. Then, we pass the embedding vectors to a Bidirectional LSTM (BiLSTM) layer to extract the contextual information from the sequences of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "2. DeepMoji: This module feeds the input to the DeepMoji model and passes the last hidden representation through a non-linear layer to project it into the same space as the output from the BiLSTM module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "To combine the outputs of the above-mentioned modules, we try the following two approaches:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "Concatenation: One popular way to incorporate information into deep neural models is concatenation. In this approach, we pass the output of the BiLSTM to an attention layer, following Bahdanau et al. (2015) , to aggregate the output hidden states of the BiLSTM into a single vector. Within this layer, we calculate the weighted sum $r = \sum_i \alpha_i h_i$, where $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$ is the concatenation of the forward and backward hidden states of the BiLSTM, and $\alpha_i$ stands for the relative importance of the words, measured as follows:", "cite_spans": [ { "start": 178, "end": 200, "text": "Bahdanau et al. (2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\alpha_i = \mathrm{softmax}(v^T \tanh(W_h h_i + b_h))", "eq_num": "(1)" } ], "section": "Data", "sec_num": null }, { "text": "where $W_h$ is a weight matrix, and $b_h$ and $v$ are parameters of the model. We refer to this attention model as the Regular Attention (RA) in the rest of the paper. We concatenate the outputs of the RA and DeepMoji modules. The resulting vector is then fed into a hidden dense layer with 100 neurons. To improve the generalization of the model, we use batch normalization and dropout with a rate of 0.5 after the hidden layer. Finally, we use a two-neuron output layer with softmax activation to predict whether the input text is offensive or not.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null },
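Below is a hedged PyTorch sketch of the Concatenation model. Only the 100-unit dense layer, batch normalization, dropout rate 0.5, and two-way softmax output come from the paper; the embedding and LSTM dimensions, the tanh activations, and all names are assumptions.

```python
# Hedged sketch of the Concatenation model (BiLSTM + RA + DeepMoji).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegularAttention(nn.Module):
    """Additive attention of Eq. (1): alpha_i = softmax(v^T tanh(W_h h_i + b_h))."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)          # W_h and b_h
        self.v = nn.Linear(dim, 1, bias=False)   # v
    def forward(self, h):                        # h: (batch, seq, dim)
        alpha = F.softmax(self.v(torch.tanh(self.proj(h))), dim=1)
        return (alpha * h).sum(dim=1)            # r = sum_i alpha_i h_i

class ConcatModel(nn.Module):
    def __init__(self, emb_dim=768, lstm_dim=128, moji_dim=2304):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, lstm_dim, batch_first=True,
                              bidirectional=True)
        self.attn = RegularAttention(2 * lstm_dim)
        self.moji_proj = nn.Linear(moji_dim, 2 * lstm_dim)  # non-linear projection
        self.dense = nn.Linear(4 * lstm_dim, 100)
        self.bn = nn.BatchNorm1d(100)
        self.drop = nn.Dropout(0.5)
        self.out = nn.Linear(100, 2)             # 2-way softmax folded into the loss
    def forward(self, emb, moji):                # emb: word embeddings, moji: DeepMoji
        h, _ = self.bilstm(emb)                  # contextual word states
        r = self.attn(h)                         # attended text vector
        e = torch.tanh(self.moji_proj(moji))     # projected DeepMoji vector
        x = self.drop(self.bn(torch.tanh(self.dense(torch.cat([r, e], -1)))))
        return self.out(x)                       # 2-way logits
```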
{ "text": "Gated Emotion-Aware Attention (GEA): In this approach, instead of directly concatenating the text and DeepMoji representations, we hypothesize that focusing only on the word representations in the attention model is not enough, for two reasons: (1) many bad words may also be used in a neutral way to make jokes or pay compliments among friends, and (2) some texts do not contain any profanities but are still offensive to the receiver. Both cases may confuse the model in its final prediction. Therefore, we design the GEA mechanism to consider not only the word representations, but also the emotions behind the text, to better determine the most relevant words in a post. We use the idea of the Gated Multimodal Unit (Ovalle et al., 2017) to create GEA. The overall architecture of this model is shown in Figure 3 . Let us assume that $h_i$ and $e_i$ are the output representations of the BiLSTM and DeepMoji modules, respectively. For each of them, we have a gate neuron (represented by the $\sigma$ nodes in Figure 3 ) that controls the contribution of that feature to the attention weights. We calculate $\alpha_i$ as follows:", "cite_spans": [], "ref_spans": [ { "start": 817, "end": 825, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 1003, "end": 1011, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\tilde{h}_i = \tanh(W_h h_i)", "eq_num": "(2)" } ], "section": "Data", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\tilde{e}_i = \tanh(W_e e_i)", "eq_num": "(3)" } ], "section": "Data", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z_i = \sigma(W_z [\tilde{h}_i; \tilde{e}_i])", "eq_num": "(4)" } ], "section": "Data", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "hid_i = z_i * \tilde{h}_i + (1 - z_i) * \tilde{e}_i", "eq_num": "(5)" } ], "section": "Data", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\alpha_i = \mathrm{softmax}(v^T hid_i)", "eq_num": "(6)" } ], "section": "Data", "sec_num": null }, { "text": "where $\{W_h, W_e, W_z\}$ are weight matrices, and $v$ is a parameter vector of the model. $W_e$ is shared across the words and adds emotion effects to the attention weights. The output of the attention layer is the weighted sum $r$, calculated as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "r = \sum_i \alpha_i h_i", "eq_num": "(7)" } ], "section": "Data", "sec_num": null }, { "text": "Finally, we pass the output of the attention mechanism to a fully connected layer with the same settings as in the Concatenation model, and generate a two-dimensional output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": null },
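A hedged PyTorch sketch of GEA following Eqs. (2)-(7) is given below; the projection dimension is an assumption, and the tilde-projected vectors follow the gated multimodal unit formulation.

```python
# Hedged sketch of Gated Emotion-Aware Attention (GEA), Eqs. (2)-(7).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedEmotionAwareAttention(nn.Module):
    def __init__(self, text_dim, emo_dim, proj_dim):
        super().__init__()
        self.W_h = nn.Linear(text_dim, proj_dim, bias=False)      # Eq. (2)
        self.W_e = nn.Linear(emo_dim, proj_dim, bias=False)       # Eq. (3), shared across words
        self.W_z = nn.Linear(2 * proj_dim, proj_dim, bias=False)  # Eq. (4)
        self.v = nn.Linear(proj_dim, 1, bias=False)               # Eq. (6)

    def forward(self, h, e):
        # h: (batch, seq, text_dim) BiLSTM states; e: (batch, emo_dim) DeepMoji vector
        h_t = torch.tanh(self.W_h(h))                             # Eq. (2)
        e_t = torch.tanh(self.W_e(e)).unsqueeze(1).expand_as(h_t) # Eq. (3), broadcast
        z = torch.sigmoid(self.W_z(torch.cat([h_t, e_t], -1)))    # gate, Eq. (4)
        hid = z * h_t + (1.0 - z) * e_t                           # Eq. (5)
        alpha = F.softmax(self.v(hid), dim=1)                     # Eq. (6)
        return (alpha * h).sum(dim=1)                             # Eq. (7)
```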
{ "text": "We split the Curious Cat data into train and test sets with a stratified 70:30 ratio, and use 20% of the train data as the validation set. For the other corpora, we use the same train, validation, and test folds as the original papers. As for preprocessing, we truncate the posts to 200 tokens and right-pad the shorter sequences with zeros. We use Binary Cross Entropy to compute the loss between predicted and actual labels. To mitigate the class imbalance in the datasets, we add class weights to the loss function. The network weights are updated using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 1e-5. We train the model for 200 epochs and report the test results based on the best macro F1 obtained on the validation set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "We compare our proposed models against the state-of-the-art and several strong baselines listed below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and SOTA Approaches", "sec_num": "5.1" }, { "text": "DeepMoji Baseline: We directly pass the output of the DeepMoji module to the dense and output layers. The motivation behind this baseline is to estimate the power of the DeepMoji model to detect abusive language on its own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and SOTA Approaches", "sec_num": "5.1" }, { "text": "BiLSTM + RA: In this baseline, we perform the classification using only the textual information. This model uses the RA on top of the BiLSTM module and directly passes the output representation to the fully connected and output layers. The motivation behind this model is to compare the performance of RA with GEA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and SOTA Approaches", "sec_num": "5.1" }, { "text": "BERT Baseline: We directly pass the hidden representation of BERT's last layer for the [CLS] token to the dense and output layers. With this model, we aim at testing the power of BERT as a feature extractor for the task of abusive language detection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and SOTA Approaches", "sec_num": "5.1" }, { "text": "Sam'17 (Samghabadi et al., 2017) : This is the state-of-the-art for the ask.fm corpus; it applies an SVM classifier on top of a combination of various features.", "cite_spans": [ { "start": 7, "end": 32, "text": "(Samghabadi et al., 2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and SOTA Approaches", "sec_num": "5.1" },
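Returning to the training setup of Section 5, the following is a minimal sketch under stated assumptions: it reuses the hypothetical ConcatModel from the earlier sketch, assumes inverse-frequency class weights (the paper states class weights are used but not the scheme), and treats the class-weighted binary cross-entropy over the two-way softmax head as weighted cross-entropy; loaders and counts are illustrative names.

```python
# Sketch of the training loop: class-weighted loss, Adam (lr 1e-5), 200 epochs,
# model selection by validation macro-F1. Names like train_loader, val_loader,
# val_labels, n_neutral, n_offensive are hypothetical.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

counts = torch.tensor([n_neutral, n_offensive], dtype=torch.float)
weights = counts.sum() / (2.0 * counts)          # assumed inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)  # weighted CE over 2-way logits
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

best_f1 = 0.0
for epoch in range(200):
    model.train()
    for emb, moji, y in train_loader:            # posts truncated/padded to 200 tokens
        optimizer.zero_grad()
        loss = criterion(model(emb, moji), y)
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        preds = torch.cat([model(e, m).argmax(-1) for e, m, _ in val_loader])
    f1 = f1_score(val_labels, preds.numpy(), average="macro")
    if f1 > best_f1:                             # keep the best validation checkpoint
        best_f1 = f1
        torch.save(model.state_dict(), "best.pt")
```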
{ "text": "Kaggle Winner: This shows the results of the winner of the Kaggle competition on detecting insults in social commentary. The model is an ensemble of several machine learning classifiers with word and character n-gram lexical features. 8", "cite_spans": [ { "start": 244, "end": 245, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and SOTA Approaches", "sec_num": "5.1" }, { "text": "Bodapati'19 (Bodapati et al., 2019) : This work reported the state-of-the-art results on the Wikipedia dataset. The authors added a single dense layer on top of BERT to fine-tune it for the task of abusive language detection. We implemented this model ourselves since the code was not released.", "cite_spans": [ { "start": 258, "end": 281, "text": "(Bodapati et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines and SOTA Approaches", "sec_num": "5.1" }, { "text": "For the evaluation, we use the F1 score for the negative/offensive class, since this is the class of interest. We also report the weighted F1 score, which averages performance over both classes. This is to ensure that the model does not sacrifice the positive/neutral class to increase the performance of the negative class. The nature of the data can differ across domains; for example, informal language is used more often in the Curious Cat and ask.fm data than in Kaggle and Wikipedia. Therefore, the type of embeddings we use in our experiments could be an important factor in the final performance. We plan to use the BERT language model as the embeddings in our experiments; however, we prefer not to fine-tune the BERT weights because of the computational cost. Therefore, we run our BiLSTM + RA baseline with the following two embedding models to see which one works best across all corpora:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "1. 200-dimensional GloVe 9 embeddings trained on Twitter; 2. BERT base (uncased) contextualized embeddings trained on the BookCorpus and English Wikipedia corpus (Devlin et al., 2019) . 10", "cite_spans": [ { "start": 161, "end": 182, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "Based on the results shown in Table 3 , BERT performs better than GloVe embeddings across all datasets, despite the fact that we do not fine-tune its weights. Therefore, we use BERT as the embeddings in the rest of the experiments. Table 3 : Comparison between GloVe and BERT embeddings using the BiLSTM + RA baseline. We do not fine-tune BERT in our experiments and only use it as a feature extractor. Table 4 compares the performance of the GEA and RA attention mechanisms. For the Curious Cat and ask.fm corpora, the BiLSTM + GEA model performs significantly 11 better than BiLSTM + RA, which demonstrates the effectiveness of our proposed attention mechanism for detecting offensive language in short and noisy texts. BiLSTM + RA shows slightly better performance on Kaggle, as well as a significant improvement on the Wikipedia dataset, in comparison with BiLSTM + GEA. 9 https://nlp.stanford.edu/projects/glove 10 We only use BERT as a feature extractor. 11 All significance testing is done using the McNemar test.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 3", "ref_id": null }, { "start": 246, "end": 253, "text": "Table 3", "ref_id": null }, { "start": 413, "end": 420, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" },
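Footnote 10 states that BERT is used purely as a frozen feature extractor. A minimal sketch of that usage, assuming the HuggingFace transformers library (the paper does not name its tooling):

```python
# Sketch: BERT base (uncased) as a frozen feature extractor (no fine-tuning).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

enc = tokenizer(["u should use ur hands to choke urself."],
                padding=True, truncation=True, max_length=200,
                return_tensors="pt")
with torch.no_grad():                    # frozen: no gradient updates to BERT
    out = bert(**enc)

token_embs = out.last_hidden_state       # (1, seq_len, 768): input to BiLSTM + RA
cls_vec = out.last_hidden_state[:, 0]    # [CLS] vector used by the BERT Baseline
```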
{ "text": "This observation could be explained as follows: documents are longer in Kaggle and Wikipedia than in Curious Cat and ask.fm. Therefore, the DeepMoji module, which is trained on short tweets, probably has difficulty generating emotion representations for the Kaggle and Wikipedia data. Table 5 shows the classification results, including the performance of our proposed models, baselines, and state-of-the-art approaches across all four corpora. For the Curious Cat data, the DeepMoji Baseline shows very promising results. This model performs significantly better than fine-tuned BERT (Bodapati'19) , which shows the power of the DeepMoji representations. Combining the text and emotion information through either the BiLSTM + RA + DeepMoji or the BiLSTM + GEA model produces results that are slightly better than the DeepMoji Baseline.", "cite_spans": [ { "start": 647, "end": 660, "text": "(Bodapati'19)", "ref_id": null } ], "ref_spans": [ { "start": 340, "end": 347, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "For the ask.fm corpus, BiLSTM + RA + DeepMoji and BiLSTM + GEA + DeepMoji show almost similar performance. The former performs slightly better on the negative/offensive class (showing a higher F1), while the latter works better on the positive/neutral class (having a higher weighted F1, as well as a very promising F1). The reported results for both models are significantly better than the state-of-the-art results on ask.fm (Sam'17), the DeepMoji baseline, and fine-tuned BERT (Bodapati'19), which proves the effectiveness of our proposed approaches for integrating emotion information into the textual representation.", "cite_spans": [ { "start": 480, "end": 493, "text": "(Bodapati'19)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "For Kaggle, Bodapati'19 reports the best results. However, the performance of that model is not significantly better than our best model, BERT Baseline + DeepMoji, under the McNemar test. Although none of our main models (BiLSTM + RA + DeepMoji and BiLSTM + GEA) is the winner for Kaggle, the best performing model across our proposed approaches and baselines (i.e., BERT Baseline + DeepMoji) still has DeepMoji as part of its architecture. For the Wikipedia dataset, Bodapati et al. (2019) report a weighted F1 of 95.7 as the state-of-the-art result. However, when we re-implement their model, we achieve a slightly better weighted F1 of 95.9, which is what we report in Table 5 . Although we achieve the same weighted F1 of 95.9 with the BiLSTM + RA model, the F1 for the offensive class is around 1% worse than Bodapati'19 , indicating that our model probably works better for the neutral class. For this corpus, it seems that integrating the emotion information into the model decreases the performance, which is in line with what we observe in Table 4 . A possible reason for this is that the Wikipedia corpus is, in nature, very similar to the data used for pre-training BERT, and very different from the Twitter data used for pre-training DeepMoji. Therefore, in this case, the text representation generated by BERT is more powerful than the DeepMoji representation, and combining the two representations does not improve the results.", "cite_spans": [ { "start": 431, "end": 453, "text": "Bodapati et al. (2019)", "ref_id": "BIBREF3" }, { "start": 785, "end": 796, "text": "Bodapati'19", "ref_id": null } ], "ref_spans": [ { "start": 631, "end": 638, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 1019, "end": 1026, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" },
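The significance claims above rest on McNemar's test (footnote 11). A minimal sketch using statsmodels, where y_true, pred_a, and pred_b are hypothetical label/prediction arrays for two systems on the same test set:

```python
# Sketch of McNemar's test for comparing two classifiers' test predictions.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

correct_a = (pred_a == y_true)
correct_b = (pred_b == y_true)
# 2x2 agreement/disagreement table between the two systems.
table = [[np.sum(correct_a & correct_b), np.sum(correct_a & ~correct_b)],
         [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)]]
result = mcnemar(table, exact=False, correction=True)
print(result.pvalue)  # p < 0.05 => the two systems differ significantly
```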
(2019)", "ref_id": "BIBREF3" }, { "start": 785, "end": 796, "text": "Bodapati'19", "ref_id": null } ], "ref_spans": [ { "start": 631, "end": 638, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 1019, "end": 1026, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "Overall, we can conclude that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "1. For short and noisy text data like Curious Cat and ask.fm, integrating the emotion information (by DeepMoji representation) into the textual representation produces the best results in comparison with all other baselines. It demonstrates the advantages of using DeepMoji representation to extract contextual information from online content. The reason is that Deep-Moji considers fine-grained emoji categories, which capture different levels of emotional feelings (e.g., , , and show different levels of anger). Such information helps the model to determine the tone of language more precisely. In Section 5.3, we provide a more detailed analysis of the DeepMoji model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "2. For Kaggle and Wikipedia data that are longer and more structured, fine-tuned BERT (Bodapati'19) is the winner. However, the results reported by this model are not significantly better than our best performing approaches (i.e., BERT Baseline + DeepMoji for Kaggle, and BiLSTM + RA for Wikipedia). It should be noted that unlike Bodapati'19, we do not fine-tune BERT (fine-tuning BERT is computationally expensive, especially on large corpora like Wikipedia), which is a good achievement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "3. There are major differences between the performances of different models across the various datasets that we use. This observation shows that it is very challenging to build a model that works well in different domains. It also confirms the need to collect more data from a variety of social media platforms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Results", "sec_num": "5.2" }, { "text": "To show why emoji representations are helpful to detect the abusive language in social media, we plot the emoji distribution over the neutral and offensive classes for the Curious Cat training data (Figure 4 ). For creating this plot, we use the average DeepMoji vector extracted for each instance. This vector shows the relevance of each emoji to a specific comment. We create the overall emoji vector per class by averaging the emoji vectors extracted for all of the instances of the same class. Finally, we select 19 out of the 64 emojis used in the DeepMoji project to create the final plot. As it is shown in Figure 4 , there are different patterns visible for the neutral and offensive classes. This observation validates our hypothesis on why it is useful to incorporate emoji information into the model. Based on Figure 4 , angry emojis ( , , ) are highly correlated with the offensive class, inversely happy and love faces ( , , ) appeared more frequently in the neutral class. For the happy and love faces, and , the differences between offensive and neutral classes are much less. 
{ "text": "As shown in Figure 4 , there are different patterns visible for the neutral and offensive classes. This observation validates our hypothesis on why it is useful to incorporate emoji information into the model. Based on Figure 4 , angry emojis ( , , ) are highly correlated with the offensive class; conversely, happy and love faces ( , , ) appear more frequently in the neutral class. For the happy and love faces, and , the differences between the offensive and neutral classes are much smaller. We believe that this represents the scenarios where a defender (a user who defends the victim of online attacks) tries to support an attacked user by complimenting him/her, while expressing hatred towards the attackers. Sad faces ( , , , , ) are more frequent in neutral instances than offensive ones. This possibly reflects cases where a user expresses his/her unhappiness in response to an attack. Interestingly, the laughing face, , shows a higher probability for the negative class. This can be linked to the scenario where someone attempts to bully a user by mocking him/her. Additionally, the plot shows exactly the same probabilities for the poker face ( ) over the offensive and neutral classes, so we can conclude that this emoji does not convey any additional information related to offensive language. Other emojis ( , , , and ) that indicate violent and threatening behavior towards the receiver also appear frequently in the offensive class.", "cite_spans": [ { "start": 1328, "end": 1339, "text": "( , , , , )", "ref_id": null }, { "start": 1924, "end": 1935, "text": "( , , , and", "ref_id": null } ], "ref_spans": [ { "start": 614, "end": 622, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 821, "end": 829, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Why Does DeepMoji Work?", "sec_num": "5.3" }, { "text": "In this paper, we create a new resource for the task of abusive language detection that does not focus on a specific list of bad words. We also propose two different approaches for incorporating emotion information into the textual representation by presenting end-to-end deep neural models that show very promising results across three existing corpora, as well as our new corpus for abusive language detection. Based on the results, adding emotion information to the model can improve the performance, especially for short and noisy textual data. As for future work, since the perceived level of aggression is very subjective, we plan to jointly model the question and answer within a pair for the Curious Cat and ask.fm data. We believe that the reply that a user provides in response to a received question/comment is a strong indicator of whether it was offensive or neutral towards the user. Another possible path for moving this research forward is to expand this task to the detection of cyberbullying incidents, which have also become a growing concern in online communities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "https://curiouscat.me", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the code available at https://github.com/NiloofarSafi/Detecting-Nastiness 4 https://ask.fm", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Methodology: Emojis help online users to better express their feelings within the text. 
With this notion, we hypothesize that emojis are effective tools to provide additional context for online comments, resulting in better offensive language recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "6 https://www.kaggle.com/c/detecting-insults-in-social-commentary 7 https://ritual.uh.edu/curious-cat-corpus/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The code for this model is available through the competition discussion page: https://www.kaggle.com/c/detecting-insults-in-social-commentary/leaderboard", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Suspended accounts: A source of tweets with disgust and anger emotions for augmenting hate speech data sample", "authors": [ { "first": "Wafa", "middle": [], "last": "Alorainy", "suffix": "" }, { "first": "Pete", "middle": [], "last": "Burnap", "suffix": "" }, { "first": "Han", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Javed", "suffix": "" }, { "first": "Matthew L", "middle": [], "last": "Williams", "suffix": "" } ], "year": 2018, "venue": "2018 International Conference on Machine Learning and Cybernetics (ICMLC)", "volume": "2", "issue": "", "pages": "581--586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wafa Alorainy, Pete Burnap, Han Liu, Amir Javed, and Matthew L Williams. 2018. Suspended accounts: A source of tweets with disgust and anger emotions for augmenting hate speech data sample. In 2018 International Conference on Machine Learning and Cybernetics (ICMLC), volume 2, pages 581-586. IEEE.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Deep learning for hate speech detection in tweets", "authors": [ { "first": "Pinkesh", "middle": [], "last": "Badjatiya", "suffix": "" }, { "first": "Shashank", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Manish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vasudeva", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web Companion", "volume": "", "issue": "", "pages": "759--760", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 759-760. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. 
In 3rd International Conference on Learning Representations, ICLR 2015.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural word decomposition models for abusive language detection", "authors": [ { "first": "Sravan", "middle": [], "last": "Bodapati", "suffix": "" }, { "first": "Spandana", "middle": [], "last": "Gella", "suffix": "" }, { "first": "Kasturi", "middle": [], "last": "Bhattacharjee", "suffix": "" }, { "first": "Yaser", "middle": [], "last": "Al-Onaizan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "135--145", "other_ids": { "DOI": [ "10.18653/v1/W19-3515" ] }, "num": null, "urls": [], "raw_text": "Sravan Bodapati, Spandana Gella, Kasturi Bhattacharjee, and Yaser Al-Onaizan. 2019. Neural word decomposition models for abusive language detection. In Proceedings of the Third Workshop on Abusive Language Online, pages 135-145, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Comparing different supervised approaches to hate speech detection", "authors": [ { "first": "Michele", "middle": [], "last": "Corazza", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Pinar", "middle": [], "last": "Arslan", "suffix": "" }, { "first": "Rachele", "middle": [], "last": "Sprugnoli", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Cabrio", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Tonelli", "suffix": "" }, { "first": "Serena", "middle": [], "last": "Villata", "suffix": "" }, { "first": "Fondazione Bruno", "middle": [], "last": "Kessler", "suffix": "" } ], "year": 2018, "venue": "EVALITA Evaluation of NLP and Speech Tools for Italian", "volume": "12", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele Corazza, Stefano Menini, Pinar Arslan, Rachele Sprugnoli, Elena Cabrio, Sara Tonelli, Serena Villata, and Fondazione Bruno Kessler. 2018. Comparing different supervised approaches to hate speech detection. EVALITA Evaluation of NLP and Speech Tools for Italian, 12:230.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [ "W" ], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eleventh International Conference on Web and Social Media, ICWSM 2017", "volume": "", "issue": "", "pages": "512--515", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International Conference on Web and Social Media, ICWSM 2017, Montr\u00e9al, Qu\u00e9bec, Canada, May 15-18, 2017, pages 512-515. 
AAAI Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Common sense reasoning for detection, prevention, and mitigation of cyberbullying", "authors": [ { "first": "Karthik", "middle": [], "last": "Dinakar", "suffix": "" }, { "first": "Birago", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Havasi", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Lieberman", "suffix": "" }, { "first": "Rosalind", "middle": [], "last": "Picard", "suffix": "" } ], "year": 2012, "venue": "ACM Transactions on Interactive Intelligent Systems (TiiS)", "volume": "2", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Dinakar, Birago Jones, Catherine Havasi, Henry Lieberman, and Rosalind Picard. 2012. Common sense reasoning for detection, prevention, and mitigation of cyberbullying. ACM Transactions on Interactive Intelligent Systems (TiiS), 2(3):18.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", "authors": [ { "first": "Bjarke", "middle": [], "last": "Felbo", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Mislove", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" }, { "first": "Iyad", "middle": [], "last": "Rahwan", "suffix": "" }, { "first": "Sune", "middle": [], "last": "Lehmann", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1615--1625", "other_ids": { "DOI": [ "10.18653/v1/d17-1169" ] }, "num": null, "urls": [], "raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1615-1625. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Measuring nominal scale agreement among many raters", "authors": [ { "first": "L", "middle": [], "last": "Joseph", "suffix": "" }, { "first": "", "middle": [], "last": "Fleiss", "suffix": "" } ], "year": 1971, "venue": "Psychological bulletin", "volume": "76", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin, 76(5):378.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using convolutional neural networks to classify hatespeech", "authors": [ { "first": "Bj\u00f6rn", "middle": [], "last": "Gamb\u00e4ck", "suffix": "" }, { "first": "Utpal", "middle": [], "last": "Kumar Sikdar", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the first workshop on abusive language online", "volume": "", "issue": "", "pages": "85--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bj\u00f6rn Gamb\u00e4ck and Utpal Kumar Sikdar. 2017. Using convolutional neural networks to classify hatespeech. In Proceedings of the first workshop on abusive language online, pages 85-90.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Detecting online hate speech using context aware models", "authors": [ { "first": "Lei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "260--266", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Gao and Ruihong Huang. 2017. Detecting online hate speech using context aware models. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 260-266.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A lexicon-based approach for hate speech detection", "authors": [ { "first": "Njagi", "middle": [], "last": "Dennis Gitari", "suffix": "" }, { "first": "Zhang", "middle": [], "last": "Zuping", "suffix": "" }, { "first": "Hanyurwimfura", "middle": [], "last": "Damien", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Long", "suffix": "" } ], "year": 2015, "venue": "International Journal of Multimedia and Ubiquitous Engineering", "volume": "10", "issue": "4", "pages": "215--230", "other_ids": {}, "num": null, "urls": [], "raw_text": "Njagi Dennis Gitari, Zhang Zuping, Hanyurwimfura Damien, and Jun Long. 2015. A lexicon-based approach for hate speech detection. International Journal of Multimedia and Ubiquitous Engineering, 10(4):215-230.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Online harassment in context: Trends from three youth internet safety surveys", "authors": [ { "first": "M", "middle": [], "last": "Lisa", "suffix": "" }, { "first": "Kimberly", "middle": [ "J" ], "last": "Jones", "suffix": "" }, { "first": "David", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "", "middle": [], "last": "Finkelhor", "suffix": "" } ], "year": 2000, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa M Jones, Kimberly J Mitchell, and David Finkelhor. 2013. Online harassment in context: Trends from three youth internet safety surveys (2000, 2005, 2010). 
Psychology of violence, 3(1):53.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Lexicon-enhancement of embedding-based approaches towards the detection of abusive language", "authors": [ { "first": "Anna", "middle": [], "last": "Koufakou", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Scott", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying", "volume": "", "issue": "", "pages": "150--157", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Koufakou and Jason Scott. 2020. Lexicon-enhancement of embedding-based approaches towards the detection of abusive language. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 150-157.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Hate speech classification in social media using emotional analysis", "authors": [ { "first": "Ricardo", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Gomes", "suffix": "" }, { "first": "Jos\u00e9 Jo\u00e3o", "middle": [], "last": "Almeida", "suffix": "" }, { "first": "Paulo", "middle": [], "last": "Novais", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Henriques", "suffix": "" } ], "year": 2018, "venue": "7th Brazilian Conference on Intelligent Systems (BRACIS)", "volume": "", "issue": "", "pages": "61--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ricardo Martins, Marco Gomes, Jos\u00e9 Jo\u00e3o Almeida, Paulo Novais, and Pedro Henriques. 2018. Hate speech classification in social media using emotional analysis. In 2018 7th Brazilian Conference on Intelligent Systems (BRACIS), pages 61-66. IEEE.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Abusive language detection with graph convolutional networks", "authors": [ { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Marco", "middle": [ "Del" ], "last": "Tredici", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019", "volume": "1", "issue": "", "pages": "2145--2150", "other_ids": { "DOI": [ "10.18653/v1/n19-1221" ] }, "num": null, "urls": [], "raw_text": "Pushkar Mishra, Marco Del Tredici, Helen Yannakoudakis, and Ekaterina Shutova. 2019a. Abusive language detection with graph convolutional networks. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2145-2150. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Author profiling for hate speech detection", "authors": [ { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Marco", "middle": [ "Del" ], "last": "Tredici", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pushkar Mishra, Marco Del Tredici, Helen Yannakoudakis, and Ekaterina Shutova. 2019b. Author profiling for hate speech detection. CoRR, abs/1902.06734.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Tackling online abuse: A survey of automated abuse detection methods", "authors": [ { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2019c. Tackling online abuse: A survey of automated abuse detection methods. CoRR, abs/1908.06024.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Abusive language detection in online user content", "authors": [ { "first": "Chikashi", "middle": [], "last": "Nobata", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "Achint", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th International Conference on World Wide Web, WWW '16", "volume": "", "issue": "", "pages": "145--153", "other_ids": { "DOI": [ "10.1145/2872427.2883062" ] }, "num": null, "urls": [], "raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive language detection in online user content. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, pages 145-153, Republic and Canton of Geneva, Switzerland. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Gated multimodal units for information fusion", "authors": [ { "first": "John", "middle": [ "Edison" ], "last": "Arevalo Ovalle", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Montes-Y-G\u00f3mez", "suffix": "" }, { "first": "Fabio", "middle": [ "A" ], "last": "Gonz\u00e1lez", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Edison Arevalo Ovalle, Thamar Solorio, Manuel Montes-y-G\u00f3mez, and Fabio A. Gonz\u00e1lez. 2017. Gated multimodal units for information fusion.
In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. OpenReview.net.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Joint modelling of emotion and abusive language detection", "authors": [ { "first": "Santhosh", "middle": [], "last": "Rajamanickam", "suffix": "" }, { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4270--4279", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.394" ] }, "num": null, "urls": [], "raw_text": "Santhosh Rajamanickam, Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2020. Joint modelling of emotion and abusive language detection. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4270-4279, Online. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Characterizing and detecting hateful users on twitter", "authors": [ { "first": "Manoel", "middle": [ "Horta" ], "last": "Ribeiro", "suffix": "" }, { "first": "Pedro", "middle": [ "H" ], "last": "Calais", "suffix": "" }, { "first": "Yuri", "middle": [ "A" ], "last": "Santos", "suffix": "" }, { "first": "Virg\u00edlio", "middle": [ "A", "F" ], "last": "Almeida", "suffix": "" }, { "first": "Wagner", "middle": [], "last": "Meira", "suffix": "Jr" } ], "year": 2018, "venue": "Twelfth International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manoel Horta Ribeiro, Pedro H Calais, Yuri A Santos, Virg\u00edlio AF Almeida, and Wagner Meira Jr. 2018. Characterizing and detecting hateful users on twitter. In Twelfth International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Detecting nastiness in social media", "authors": [ { "first": "Niloofar", "middle": [ "Safi" ], "last": "Samghabadi", "suffix": "" }, { "first": "Suraj", "middle": [], "last": "Maharjan", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Sprague", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Diaz-Sprague", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "63--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niloofar Safi Samghabadi, Suraj Maharjan, Alan Sprague, Raquel Diaz-Sprague, and Thamar Solorio. 2017. Detecting nastiness in social media.
In Proceedings of the First Workshop on Abusive Language Online, pages 63-72.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "RiTUAL-UH at TRAC 2018 shared task: Aggression identification", "authors": [ { "first": "Niloofar", "middle": [ "Safi" ], "last": "Samghabadi", "suffix": "" }, { "first": "Deepthi", "middle": [], "last": "Mave", "suffix": "" }, { "first": "Sudipta", "middle": [], "last": "Kar", "suffix": "" }, { "first": "Thamar", "middle": [], "last": "Solorio", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018", "volume": "", "issue": "", "pages": "12--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niloofar Safi Samghabadi, Deepthi Mave, Sudipta Kar, and Thamar Solorio. 2018. RiTUAL-UH at TRAC 2018 shared task: Aggression identification. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying, TRAC@COLING 2018, Santa Fe, New Mexico, USA, August 25, 2018, pages 12-18. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The risk of racial bias in hate speech detection", "authors": [ { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Dallas", "middle": [], "last": "Card", "suffix": "" }, { "first": "Saadia", "middle": [], "last": "Gabriel", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Noah A", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1668--1678", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1668-1678.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A survey on hate speech detection using natural language processing", "authors": [ { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1-10.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "
Detection and fine-grained classification of cyberbullying events", "authors": [ { "first": "Cynthia", "middle": [], "last": "Van Hee", "suffix": "" }, { "first": "Els", "middle": [], "last": "Lefever", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Verhoeven", "suffix": "" }, { "first": "Julie", "middle": [], "last": "Mennes", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Desmet", "suffix": "" }, { "first": "Guy", "middle": [], "last": "De Pauw", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "V\u00e9ronique", "middle": [], "last": "Hoste", "suffix": "" } ], "year": 2015, "venue": "International Conference Recent Advances in Natural Language Processing (RANLP)", "volume": "", "issue": "", "pages": "672--680", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cynthia Van Hee, Els Lefever, Ben Verhoeven, Julie Mennes, Bart Desmet, Guy De Pauw, Walter Daelemans, and V\u00e9ronique Hoste. 2015. Detection and fine-grained classification of cyberbullying events. In International Conference Recent Advances in Natural Language Processing (RANLP), pages 672-680.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Challenges and frontiers in abusive content detection", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Rebekah", "middle": [], "last": "Tromble", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Margetts", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "80--93", "other_ids": { "DOI": [ "10.18653/v1/W19-3509" ] }, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detection. In Proceedings of the Third Workshop on Abusive Language Online, pages 80-93, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Inducing a lexicon of abusive words - a feature-based approach", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Greenberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1046--1056", "other_ids": { "DOI": [ "10.18653/v1/N18-1095" ] }, "num": null, "urls": [], "raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words - a feature-based approach. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046-1056, New Orleans, Louisiana.
Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Ex machina: Personal attacks seen at scale", "authors": [ { "first": "Ellery", "middle": [], "last": "Wulczyn", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1391--1399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th International Conference on World Wide Web, pages 1391-1399. International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Youth engaging in online harassment: Associations with caregiver-child relationships, internet use, and personal characteristics", "authors": [ { "first": "Michele", "middle": [ "L" ], "last": "Ybarra", "suffix": "" }, { "first": "Kimberly", "middle": [ "J" ], "last": "Mitchell", "suffix": "" } ], "year": 2004, "venue": "Journal of adolescence", "volume": "27", "issue": "3", "pages": "319--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michele L Ybarra and Kimberly J Mitchell. 2004. Youth engaging in online harassment: Associations with caregiver-child relationships, internet use, and personal characteristics. Journal of adolescence, 27(3):319-336.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Complete agreement for questions and answers across negative/offensive and positive labeled data.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Top 5 emojis that the DeepMoji model assigned to one neutral and one offensive instance from our Curious Cat data. The words are colored based on the attention weights given by the DeepMoji model. Darker colors show higher attention weights.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Overall architecture of the Gated Emotion-Aware Attention (GEA) model.", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "Emoji distribution over Curious Cat data.", "num": null, "type_str": "figure" }, "TABREF1": { "text": "Curious Cat data distribution.", "content": "
", "html": null, "num": null, "type_str": "table" }, "TABREF6": { "text": "Comparison between RA and GEA attention models. The starred results show significant improvement compared to the opposite model.", "content": "
", "html": null, "num": null, "type_str": "table" }, "TABREF8": { "text": "Classification results in terms of F1-score for the negative/offensive class and weighted F1. +DeepMoji refers to the experiments in which we directly concatenated DeepMoji vectors with the last hidden representation generated by the model.", "content": "
architectures. This model significantly outperforms the Kaggle Winner results as well. For Wikipedia,
", "html": null, "num": null, "type_str": "table" } } } }