{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:55.154037Z" }, "title": "On Cross-Dataset Generalization in Automatic Detection of Online Abuse", "authors": [ { "first": "Isar", "middle": [], "last": "Nejadgholi", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Research Council", "location": { "country": "Canada" } }, "email": "isar.nejadgholi@nrc-cnrc.gc.ca" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Research Council", "location": { "country": "Canada" } }, "email": "svetlana.kiritchenko@nrc-cnrc.gc.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "NLP research has attained high performances in abusive language detection as a supervised classification task. While in research settings, training and test datasets are usually obtained from similar data samples, in practice systems are often applied on data that are different from the training set in topic and class distributions. Also, the ambiguity in class definitions inherited in this task aggravates the discrepancies between source and target datasets. We explore the topic bias and the task formulation bias in cross-dataset generalization. We show that the benign examples in the Wikipedia Detox dataset are biased towards platformspecific topics. We identify these examples using unsupervised topic modeling and manual inspection of topics' keywords. Removing these topics increases cross-dataset generalization, without reducing in-domain classification performance. For a robust dataset design, we suggest applying inexpensive unsupervised methods to inspect the collected data and downsize the non-generalizable content before manually annotating for class labels.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "NLP research has attained high performances in abusive language detection as a supervised classification task. While in research settings, training and test datasets are usually obtained from similar data samples, in practice systems are often applied on data that are different from the training set in topic and class distributions. Also, the ambiguity in class definitions inherited in this task aggravates the discrepancies between source and target datasets. We explore the topic bias and the task formulation bias in cross-dataset generalization. We show that the benign examples in the Wikipedia Detox dataset are biased towards platformspecific topics. We identify these examples using unsupervised topic modeling and manual inspection of topics' keywords. Removing these topics increases cross-dataset generalization, without reducing in-domain classification performance. For a robust dataset design, we suggest applying inexpensive unsupervised methods to inspect the collected data and downsize the non-generalizable content before manually annotating for class labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The NLP research community has devoted significant efforts to support the safety and inclusiveness of online discussion forums by developing automatic systems to detect hurtful, derogatory or obscene utterances. Most of these systems are based on supervised machine learning techniques, and require annotated data. Several publicly available datasets have been created for the task (Mishra et al., 2019; Vidgen and Derczynski, 2020) . 
However, due to the ambiguities in the task definition and complexities of data collection, cross-dataset generalizability remains a challenging and understudied issue of online abuse detection.", "cite_spans": [ { "start": 382, "end": 403, "text": "(Mishra et al., 2019;", "ref_id": "BIBREF14" }, { "start": 404, "end": 432, "text": "Vidgen and Derczynski, 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Existing datasets differ in the considered types of offensive behaviour and annotation schemes, data sources and data collection methods. There is no agreed-upon definition of harmful online behaviour yet. Several terms have been used to refer to the general concept of harmful online behaviour, including toxicity (Hosseini et al., 2017) , hate speech (Schmidt and Wiegand, 2017) , offensive (Zampieri et al., 2019) and abusive language (Waseem et al., 2017; Vidgen et al., 2019a) . Still, in practice, every dataset only focuses on a narrow range of subtypes of such behaviours and a single online platform (Jurgens et al., 2019) . For example, one dataset provides tweets annotated for three categories, Racist, Offensive but not Racist and Clean, and Nobata et al. (2016) collected discussions from Yahoo! Finance news and applied a binary annotation scheme of Abusive versus Clean. Further, since pure random sampling usually results in small proportions of offensive examples (Founta et al., 2018) , various sampling techniques are often employed. Zampieri et al. (2019) used words and phrases frequently found in offensive messages to search for potential abusive tweets. Founta et al. (2018) and Razavi et al. (2010) started from random sampling, then boosted the abusive part of the datasets using specific search procedures. Hosseinmardi et al. (2015) used snowballing to collect abusive posts on Instagram. Due to this variability in category definitions and data collection techniques, a system trained on a particular dataset is prone to overfitting to the specific characteristics of that dataset. As a result, although models tend to perform well in cross-validation evaluation on one dataset, the cross-dataset generalizability remains low (van Aken et al., 2018; Wiegand et al., 2019) .", "cite_spans": [ { "start": 314, "end": 337, "text": "(Hosseini et al., 2017)", "ref_id": "BIBREF9" }, { "start": 352, "end": 379, "text": "(Schmidt and Wiegand, 2017)", "ref_id": "BIBREF20" }, { "start": 392, "end": 415, "text": "(Zampieri et al., 2019)", "ref_id": "BIBREF30" }, { "start": 437, "end": 458, "text": "(Waseem et al., 2017;", "ref_id": "BIBREF25" }, { "start": 459, "end": 480, "text": "Vidgen et al., 2019a)", "ref_id": "BIBREF23" }, { "start": 608, "end": 630, "text": "(Jurgens et al., 2019)", "ref_id": "BIBREF12" }, { "start": 733, "end": 753, "text": "Nobata et al. (2016)", "ref_id": "BIBREF15" }, { "start": 960, "end": 981, "text": "(Founta et al., 2018)", "ref_id": "BIBREF6" }, { "start": 1032, "end": 1054, "text": "Zampieri et al. (2019)", "ref_id": "BIBREF30" }, { "start": 1157, "end": 1177, "text": "Founta et al. (2018)", "ref_id": "BIBREF6" }, { "start": 1182, "end": 1202, "text": "Razavi et al. (2010)", "ref_id": "BIBREF16" }, { "start": 1313, "end": 1339, "text": "Hosseinmardi et al. 
(2015)", "ref_id": "BIBREF10" }, { "start": 1734, "end": 1757, "text": "(van Aken et al., 2018;", "ref_id": "BIBREF0" }, { "start": 1758, "end": 1779, "text": "Wiegand et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we investigate the impact of two types of biases originating from source data that can emerge in a cross-domain application of models: 1) task formulation bias (discrepancy in class definitions and annotation between the training and test sets) and 2) selection bias (discrepancy in the topic and class distributions between the training and test sets). Further, we suggest topicbased dataset pruning as a method of mitigating selection bias to increase generalizability. This approach is different from domain adaptation techniques based on data selection (Ruder and Plank, 2017; Liu et al., 2019) in that we apply an unsupervised topic modeling method for topic discovery without using the class labels. We show that some topics are more generalizable than others. The topics that are specific to the training dataset lead to overfitting and, therefore, lower generalizability. Excluding or down-sampling instances associated with such topics before the expensive annotation step can substantially reduce the annotation costs.", "cite_spans": [ { "start": 571, "end": 594, "text": "(Ruder and Plank, 2017;", "ref_id": "BIBREF19" }, { "start": 595, "end": 612, "text": "Liu et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on the Wikipedia Detox or Wikidataset, (an extension of the dataset by Wulczyn et al. (2017) ), collected from English Wikipedia talk pages and annotated for toxicity. To explore the generalizability of the models trained on this dataset, we create an out-of-domain test set comprising various types of abusive behaviours by combining two existing datasets, namely Waseemdataset (Waseem and Hovy, 2016) and Fountadataset (Founta et al., 2018) , both collected from Twitter.", "cite_spans": [ { "start": 80, "end": 101, "text": "Wulczyn et al. (2017)", "ref_id": "BIBREF29" }, { "start": 388, "end": 411, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF26" }, { "start": 416, "end": 451, "text": "Fountadataset (Founta et al., 2018)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We identify topics included in the Wiki-dataset and manually examine keywords associated with the topics to heuristically determine topics' generalizability and their potential association with toxicity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We assess the generalizability of the task formulations by training a classifier to detect the Toxic class in the Wiki-dataset and testing it on an outof-domain dataset comprising various types of offensive behaviours. We find that Wiki-Toxic is most generalizable to Founta-Abusive and least generalizable to Waseem-Sexism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that re-sampling techniques result in a trade-off between the True Positive and True Negative rates on the out-of-domain test set. 
This trade-off is mainly governed by the ratio of toxic to normal instances and not the size of the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We investigate the impact of topic distribution on generalizability and show that general and identity-related topics are more generalizable than platform-specific topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We show that excluding Wikipedia-specific data instances (54% of the dataset) does not affect the results of in-domain classification, and improves both True Positive and True Negative rates on the out-of-domain test set, unlike re-sampling methods. Through unsupervised topic modeling, such topics can be identified and excluded before annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on two types of biases originated from source data: task formulation and selection bias. Task formulation bias: In commercial applications, the definitions of offensive language heavily rely on community norms and context and, therefore, are imprecise, application-dependent, and constantly evolving (Chandrasekharan et al., 2018) . Similarly in NLP research, despite having clear overlaps, offensive class definitions vary significantly from one study to another. For example, the Toxic class in the Wiki-dataset refers to aggressive or disrespectful utterances that would likely make participants leave the discussion. This definition of toxic language includes some aspects of racism, sexism and hateful behaviour. Still, as highlighted by Vidgen et al. (2019a) , identity-based abuse is fundamentally different from general toxic behavior. Therefore, the Toxic class definition used in the Wiki-dataset differs in its scope from the abuserelated categories as defined in the Waseem-dataset and Founta-dataset. Wiegand et al. (2019) converted various category sets to binary (offensive vs. normal) and demonstrated that a system trained on one dataset can identify other forms of abuse to some extent. We use the same methodology and examine different offensive categories in outof-domain test sets to explore the deviation in a system's performance caused by the differences in the task definitions. Regardless of the task formulation, abusive language can be divided into explicit and implicit (Waseem et al., 2017) . Explicit abuse refers to utterances that include obscene and offensive expressions, such as stupid or scum, even though not all utterances that include obscene expressions are considered abusive in all contexts. Implicit abuse refers to more subtle harmful behaviours, such as stereotyping and micro-aggression. Explicit abuse is usually easier to detect by human annotators and automatic systems. Also, explicit abuse is more transferable between datasets as it is part of many definitions of online abuse, including personal attacks, hate speech, and identity-based abuse. The exact definition of implicit abuse, on the other hand, can substantially vary between task formulations as it is much dependent on the context, the author and the receiver of an utterance (Wiegand et al., 2019) .", "cite_spans": [ { "start": 309, "end": 339, "text": "(Chandrasekharan et al., 2018)", "ref_id": "BIBREF3" }, { "start": 752, "end": 773, "text": "Vidgen et al. (2019a)", "ref_id": "BIBREF23" }, { "start": 1023, "end": 1044, "text": "Wiegand et al. 
(2019)", "ref_id": "BIBREF27" }, { "start": 1508, "end": 1529, "text": "(Waseem et al., 2017)", "ref_id": "BIBREF25" }, { "start": 2299, "end": 2321, "text": "(Wiegand et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Biases Originating from Source Data", "sec_num": "2" }, { "text": "Selection bias: Selection (or sampling) bias emerge when source data, on which the model is trained, is not representative of target data, on which the model is applied (Shah et al., 2020). We focus on two data characteristics affecting selection bias: topic distribution and class distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Biases Originating from Source Data", "sec_num": "2" }, { "text": "In practice, every dataset covers a limited number of topics, and the topic distributions depend on many factors, including the source of data, the search mechanism and the timing of the data collection. For example, our source dataset, Wikidataset, consists of Wikipedia talk pages dating from 2004-2015. On the other hand, one of the sources of our target dataset, Waseem-dataset, consists of tweets collected using terms and references to specific entities that frequently occur in tweets expressing hate speech. As a result of its sampling strategy, Waseem-dataset includes many tweets on the topic of 'women in sports'. Wiegand et al. (2019) showed that different data sampling methods result in various distributions of topics, which affects the generalizability of trained classifiers, especially in the case of implicit abuse detection. Unlike explicit abuse, implicitly abusive behaviour comes in a variety of semantic and syntactic forms. To train a generalizable classifier, one requires a training dataset that covers a broad range of topics, each with a good representation of offensive examples. We continue this line of work and investigate the impact of topic bias on cross-dataset generalizability by identifying and changing the distribution of topics in controlled experiments.", "cite_spans": [ { "start": 625, "end": 646, "text": "Wiegand et al. (2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Biases Originating from Source Data", "sec_num": "2" }, { "text": "The amount of online abuse on mainstream platforms varies greatly but is always very low. Founta et al. (2018) found that abusive tweets form 0.1% to 3% of randomly collected datasets. Vidgen et al. (2019b) showed that depending on the platform the prevalence of abusive language can range between 0.001% and 8%. Despite various data sampling strategies aimed at increasing the proportion of offensive instances, the class imbalance (the difference in class sizes) in available datasets is often severe. When trained on highly imbalanced data, most statistical machine learning methods exhibit a bias towards the majority class, and their performance on a minority class, usually the class of interest, suffers. A number of techniques have been proposed to address class imbalance in data, including data re-sampling, cost-sensitive learning, and neural network specific learning algorithms (Branco et al., 2016; Haixiang et al., 2017; Johnson and Khoshgoftaar, 2019) . In practice, simple re-sampling techniques, such as down-sampling of over-represented classes, often improve the overall performance of the classifier (Johnson and Khoshgoftaar, 2019) . 
However, re-sampling techniques might lead to overfitting to one of the classes causing a trade-off between True Positive and True Negative rates. When aggregated in an averaged metric such as F-score, this trade-off is usually overlooked.", "cite_spans": [ { "start": 104, "end": 110, "text": "(2018)", "ref_id": null }, { "start": 185, "end": 206, "text": "Vidgen et al. (2019b)", "ref_id": "BIBREF24" }, { "start": 891, "end": 912, "text": "(Branco et al., 2016;", "ref_id": "BIBREF2" }, { "start": 913, "end": 935, "text": "Haixiang et al., 2017;", "ref_id": "BIBREF7" }, { "start": 936, "end": 967, "text": "Johnson and Khoshgoftaar, 2019)", "ref_id": "BIBREF11" }, { "start": 1121, "end": 1153, "text": "(Johnson and Khoshgoftaar, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Biases Originating from Source Data", "sec_num": "2" }, { "text": "We exploit three large-scale, publicly available English datasets frequently used for the task of online abuse detection. Our main dataset, Wiki-dataset (Wulczyn et al., 2017) , is used as a training set. The out-of-domain test set is obtained by combining the other two datasets, Founta-dataset (Founta et al., 2018) and Waseem-dataset (Waseem and Hovy, 2016) .", "cite_spans": [ { "start": 153, "end": 175, "text": "(Wulczyn et al., 2017)", "ref_id": "BIBREF29" }, { "start": 337, "end": 360, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "Training set: The Wiki-dataset includes 160K comments collected from English Wikipedia discussions and annotated for Toxic and Normal, through crowd-sourcing 1 . Every comment is annotated by 10 workers, and the final label is obtained through majority voting. The class Toxic comprises rude, hateful, aggressive, disrespectful or unreasonable comments that are likely to make a person leave a conversation 2 . The dataset consists of randomly collected comments and comments made by users blocked for violating Wikipedia's policies to augment the proportion of toxic texts. This dataset contains 15,362 instances of Toxic and 144,324 Normal texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "Out-of-Domain test set: The toxic portion of our test set is composed of four types of offensive language: Abusive and Hateful from the Fountadataset, and Sexist and Racist from the Waseemdataset. For the benign examples of our test set, we use the Normal class of the Founta-dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "The Founta-dataset is a collection of 80K tweets crowd-annotated for four classes: Abusive, Hateful, Spam and Normal. The data is randomly sampled and then boosted with tweets that are likely to belong to one or more of the minority classes by deploying an iterative data exploration technique. The Abusive class is defined as content with any strongly impolite, rude or hurtful language that shows a debasement of someone or something, or shows intense emotions. The Hateful class refers to tweets that express hatred towards a targeted individual or group, or are intended to be derogatory, to humiliate, or to insult members of a group, on the basis of attributes such as race, religion, ethnic origin, sexual orientation, disability, or gender. 
Spam refers to posts consisting of advertising/marketing, posts selling products of adult nature, links to malicious websites, phishing attempts and other unwanted information, usually sent repeatedly. Tweets that do not fall in any of the prior classes are labelled as Normal (Founta et al., 2018). We do not include the Spam class in our test set as this category does not constitute offensive language, in general. The Founta-dataset contains 27,150 Abusive, 4,965 Hateful and 53,851 Normal instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "The Waseem-dataset includes 16K manually annotated tweets, labeled as Sexist, Racist or Neither. The corpus is collected by searching for common slurs and terms pertaining to minority groups as well as identifying tweeters that use these terms frequently. A tweet is annotated as Racist or Sexist if it uses a racial or sexist slur, attacks, seeks to silence, unjustifiably criticizes or misrepresents a minority or defends xenophobia or sexism. Tweets that do not fall in these two classes are labeled as Neither (Waseem and Hovy, 2016) . The Neither class represents a mixture of benign and abusive (but not sexist or racist) instances, and, therefore, is excluded from our test set. The Waseem-dataset contains 3,430 Sexist and 1,976 Racist tweets.", "cite_spans": [ { "start": 514, "end": 537, "text": "(Waseem and Hovy, 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "We start by exploring the content of the Wiki-dataset through topic modeling. We train a topic model using the Online Latent Dirichlet Allocation (OLDA) algorithm (Hoffman et al., 2010) as implemented in the Gensim library (\u0158eh\u016f\u0159ek and Sojka, 2010) with the default parameters. Latent Dirichlet Allocation (LDA) (Blei et al., 2003) is a Bayesian probabilistic model of a collection of texts. Each text is assumed to be generated from a multinomial distribution over a given number of topics, and each topic is represented as a multinomial distribution over the vocabulary. We pre-process the texts by lemmatizing the words and removing the stop words. To determine the optimal number of topics, we use a coherence measure that calculates the degree of semantic similarity among the top words (R\u00f6der et al., 2015) . Top words are defined as the most probable words to be seen conditioned on a topic. We experimented with a range of topic numbers between 10 and 30 and obtained the maximal average coherence with 20 topics. Each topic is represented by 10 top words. For simplicity, each text is assigned a single topic that has the highest probability. The full list of topics and their top words are available in the Appendix.", "cite_spans": [ { "start": 162, "end": 184, "text": "(Hoffman et al., 2010)", "ref_id": "BIBREF8" }, { "start": 311, "end": 330, "text": "(Blei et al., 2003)", "ref_id": "BIBREF1" }, { "start": 792, "end": 812, "text": "(R\u00f6der et al., 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Analysis of the Wiki-dataset", "sec_num": "4" }, { "text": "We group the 20 extracted topics into three categories based on the coherency of the top words and their potential association with offensive language. This is done through manual examination of the 10 top words in each topic. 
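To make the topic-discovery step above concrete, the following is a minimal sketch (not the authors' code) of the same pipeline with Gensim, whose LdaModel implements the online variational Bayes algorithm of Hoffman et al. (2010). The placeholder comments, the stop-word-based preprocessing standing in for lemmatization, and the granularity of the coherence sweep are illustrative assumptions.

```python
from gensim import corpora, models
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS

# Placeholder comments standing in for the 160K Wiki-dataset texts.
comments = [
    "Please stop reverting this article until the dispute is resolved.",
    "Thanks for adding the source, I moved it to the references section.",
    "This edit removed sourced content without any explanation.",
    "The talk page discussion already covers this naming question.",
]

def preprocess(text):
    # Tokenize, lowercase and drop stop words; the paper additionally
    # lemmatizes, which could be added with NLTK or spaCy.
    return [tok for tok in simple_preprocess(text) if tok not in STOPWORDS]

tokenized = [preprocess(text) for text in comments]
dictionary = corpora.Dictionary(tokenized)
bows = [dictionary.doc2bow(tokens) for tokens in tokenized]

def lda_with_coherence(num_topics):
    # Train an LDA model and score it with the c_v coherence measure.
    lda = models.LdaModel(corpus=bows, id2word=dictionary,
                          num_topics=num_topics, random_state=0)
    coherence = models.CoherenceModel(model=lda, texts=tokenized,
                                      dictionary=dictionary,
                                      coherence="c_v").get_coherence()
    return lda, coherence

# Sweep the number of topics (the paper reports a 10-30 range, with 20 scoring
# best) and keep the model with the highest average topic coherence.
best_lda, best_coherence = max((lda_with_coherence(k) for k in range(10, 31, 5)),
                               key=lambda pair: pair[1])

# 10 top words per topic, used for the manual categorization step.
for topic_id in range(best_lda.num_topics):
    print(topic_id, [word for word, _ in best_lda.show_topic(topic_id, topn=10)])

# Assign each comment the single most probable topic.
dominant_topic = [max(best_lda.get_document_topics(bow), key=lambda t: t[1])[0]
                  for bow in bows]
```

The c_v score used here is one of the coherence measures studied by Röder et al. (2015); the exact measure and sweep step size in the paper may differ. 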
Table 1 shows five out of ten top words for each topic that are most representative of the assigned category.", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 234, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Topic Analysis of the Wiki-dataset", "sec_num": "4" }, { "text": "The top words of two topics (topic 0 and topic 1) are general terms such as think, want, time, and life. This category forms 26% of the dataset. Since these topics appear incoherent, their association with offensiveness cannot be judged heuristically. Looking at the toxicity annotations we observe that 47% of the Toxic comments belong to these topics. These comments mostly convey personal insults, usually not tied to any identity group. The frequently used abusive terms in these Toxic comments include f*ck, stupid, idiot, *ss, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category 1: incoherent or mixture of general topics", "sec_num": null }, { "text": "Category 2: coherent, high association with offensive language", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category 1: incoherent or mixture of general topics", "sec_num": null }, { "text": "Seven of the topics can be associated with offensive language; their top words represent profanity or are related to identity groups frequently subjected to abuse. Topic 14 is the most explicitly offensive topic; nine out of ten top words are associated with insult and hatred. 97% of the instances belonging to this topic are annotated as Toxic, with 96% of them containing explicitly toxic words. 3 These are generic profanities with the word f*ck being the most frequently used word.", "cite_spans": [ { "start": 399, "end": 400, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Category 1: incoherent or mixture of general topics", "sec_num": null }, { "text": "The top words of the other six topics (topics 2, 7, 8, 9, 12, and 16) include either offensive words or terms related to identity groups based on gender, ethnicity, or religion. On average, 16% of the comments assigned to these topics are labeled as Toxic. We manually analyzed these comments, and found that each topic (except topic 12) tends to concentrate around a specific identity group. Offensive comments in topic 2 mostly contain sexual slur and target female and homosexual users. In topic 7, comments often contain racial and ethnicity based abuse. Topic 8 contains physical threats, often targeting Muslims and Jewish folks (the words die and kill are the most frequently used content words in the offensive messages of this topic). Comments in topic 9 involve many terms associated with Christianity (e.g., god, christian, Jesus). Topic 16 has the least amount of comments (0.3% of the dataset), with the offensive messages mostly targeting gay people (the word gay appears in 67% of the offensive messages in this topic). Topic 12 is comprised of personal attacks in the context of Wikipedia admin-contributor relations. The most common offensive words in this topic include f*ck, stupid, troll, ignorant, hypocrite, etc. 20% of the whole dataset and 35% of the comments labeled as Toxic belong to this category.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Category 1: incoherent or mixture of general topics", "sec_num": null }, { "text": "3 Following Wiegand et al. 
(2019) , we estimate the proportion of explicitly offensive instances in a dataset as the proportion of abusive instances that contain at least one word from the lexicon of abusive words by Wiegand et al. (2018) . Category 3: coherent, low association with offensive language The remaining eleven topics include top words specific to Wikipedia and not directly associated with offensive language. For example, keywords of topic 4 are terms such as page, Wikipedia, edit and article, and only 0.4% of the 10,471 instances in this topic are labeled as Toxic. These eleven topics comprise 54% of the comments in the dataset and 18% of the Toxic comments.", "cite_spans": [ { "start": 12, "end": 33, "text": "Wiegand et al. (2019)", "ref_id": "BIBREF27" }, { "start": 217, "end": 238, "text": "Wiegand et al. (2018)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Category 1: incoherent or mixture of general topics", "sec_num": null }, { "text": "We apply the LDA topic model trained on the Wikidataset as described in Section 4 to the Out-of-Domain test set. As before, each textual instance is assigned a single topic that has the highest probability. Table 2 summarizes the distribution of topics for all classes in the three datasets.", "cite_spans": [], "ref_spans": [ { "start": 207, "end": 214, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Topic Distribution of the Test Set", "sec_num": "5" }, { "text": "Observe that Category 3 is the least represented category of topics across all classes, except for the Normal class in the Wiki-dataset. Specifically, there is a significant deviation in the topic distribution between the Wiki-Normal and the Founta-Normal classes. This deviation can be explained by the difference in data sources. Normal conversations on Twitter are more likely to be about general concepts covered in Category 1 or identity-related topics covered in Category 2 than the specific topics such as writing and editing in Category 3. Other than Waseem-Racist, which has 67% overlap with Category 2, all types of offensive behaviour in the three datasets have more overlap with the general topics (Category 1) than identity-related topics (Category 2). For example, for the Waseem-Sexist, 50% of instances fall under Category 1, 35% under Category 2 and 15% under Category 3. Topic 1, which is a mixture of general topics, is the dominant topic among the Waseem-Sexist tweets. Out of the topics in Category 2, most of the sexist tweets are matched to topic 2 (focused on sexism and homophobia) and topic 12 (general personal insults). Note that given the sizes of the positive and negative test classes, all other common metrics, such as various kinds of averaged F1-scores, can be calculated from the accuracies per class. In addition, we report macro-averaged F-score, weighted by the sizes of the negative and positive classes, to show the overall impact of the proposed method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Distribution of the Test Set", "sec_num": "5" }, { "text": "Results: The overall performance of the classifier on the Out-of-Domain test set is quite high: weighted macro-averaged F 1 = 0.90. However, when the test set is broken down into the 20 topics of the Wiki-dataset and the accuracy is measured within the topics, the results vary greatly. For ex-ample, for the instances that fall under topic 14, the explicitly offensive topic, the F1-score is 0.99. For topic 15, a Wikipedia-specific topic, the F1-score is 0.80. 
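A minimal sketch of this per-class and per-topic breakdown is given below, using toy arrays rather than the authors' evaluation code; it assumes binary predictions and the dominant Wiki-dataset topic are already available for every test instance. Per-class accuracy corresponds to the recall of each class, from which the weighted macro-averaged F1 is also computed.

```python
import numpy as np
from sklearn.metrics import f1_score, recall_score

# Toy stand-ins; in the experiments these come from the trained classifier and
# the Wiki-dataset LDA model applied to the Out-of-Domain test set.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 1])       # 1 = offensive, 0 = Normal
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 1])       # classifier predictions
topics = np.array([14, 15, 4, 14, 2, 15, 4, 2])   # dominant Wiki topic per instance

# Per-class accuracy: the True Positive rate is the recall of the offensive
# class and the True Negative rate is the recall of the Normal class.
tpr = recall_score(y_true, y_pred, pos_label=1)
tnr = recall_score(y_true, y_pred, pos_label=0)
overall_f1 = f1_score(y_true, y_pred, average="weighted")
print(f"TPR={tpr:.2f} TNR={tnr:.2f} weighted F1={overall_f1:.2f}")

# Break the test set down by Wiki-dataset topic and score each slice separately.
for topic_id in np.unique(topics):
    mask = topics == topic_id
    slice_f1 = f1_score(y_true[mask], y_pred[mask], average="weighted")
    print(f"topic {topic_id}: n={mask.sum()} weighted F1={slice_f1:.2f}")
```
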
Table 3 shows the overall accuracies for each test class as well as the accuracies for each topic category (described in Section 4) within each class.", "cite_spans": [], "ref_spans": [ { "start": 463, "end": 470, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Topic Distribution of the Test Set", "sec_num": "5" }, { "text": "For the class Founta-Abusive, the classifier achieves 94% accuracy. 12% of the Founta-Abusive tweets fall under the explicitly offensive topic (topic 14), and those tweets are classified with a 100% accuracy. The accuracy score is highest on Category 2 and lowest on Category 3. For the Founta-Hateful class, the classifier recognizes 62% of the tweets correctly. The accuracy score is highest on Category 1 and lowest on Category 3. 8% of the Founta-Hateful tweets fall under the explicitly offensive topic (topic 14), and are classified with a 99% accuracy. For the Founta-Normal class, the classifier recognizes 96% of the tweets correctly. Unlike the Founta-Abusive and Founta-Hateful class, for the Founta-Normal class, the highest accuracy is achieved on Category 3. 0.1% of the Founta-Normal tweets fall under the explicitly offensive topic, and only 26% of them are classified correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Distribution of the Test Set", "sec_num": "5" }, { "text": "The accuracy of the classifier on the Waseem-Sexist and Waseem-Racist classes is 0.26 and 0.35, respectively. This indicates that the Wiki-dataset, annotated for toxicity, is not well suited for detecting sexist or racist tweets. This observation could be explained by the fact that none of the coherent topics extracted from the Wiki-dataset is associated strongly with sexism or racism. Nevertheless, the tweets that fall under the explicit abuse topic (topic 14) are recognized with a 100% accuracy. Topic 8, which contains abuse mostly directed towards Jewish and Muslim people, is the most dominant topic in the Racist class (32% of the class) and the accuracy score on this topic is the highest, after the explicitly offensive topic. The Racist class overlaps the least with Category 3 (see Table 2 ), and the lowest accuracy score is obtained on this category. The definitions of the Toxic and Racist classes overlap mostly in general and identity-related abuse, therefore higher accuracy scores are obtained in Categories 1 and 2. Similar to Racist tweets, Sexist tweets have the least overlap and the lowest accuracy score on Category 3. The accuracy score is the highest on the explicitly offensive topic (100%) and varies substantially across other topics. ", "cite_spans": [], "ref_spans": [ { "start": 797, "end": 804, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Topic Distribution of the Test Set", "sec_num": "5" }, { "text": "The generalizability of the classifier trained on the Wiki-dataset is affected by at least two factors: task formulation and topic distributions. The impact of task formulation: From task formulations described in Section 3, observe that the Wiki-dataset defines the class Toxic in a general way. The class Founta-Abusive is also a general formulation of offensive behaviour. The similarity of these two definitions is reflected clearly in our results. The classifier trained on the Wiki-dataset reaches 96% accuracy on the Founta-Abusive class. Unlike the Founta-Abusive class, the other three labels included in our analysis formulate a specific type of harassment against certain targets. 
Our topic analysis of the Wiki-dataset reveals that this dataset includes profanity and hateful content directed towards minority groups, but the dataset is extremely unbalanced in covering these topics. Therefore, not only is the number of useful examples for learning these classes small, but the classification models do not learn these classes effectively because of the skewness of the training dataset. This observation is in line with the fact that the trained classifier detects some of the Waseem-Racist, Waseem-Sexist and Founta-Hateful tweets correctly, but overall performs poorly on these classes. The impact of topic distribution: Our analysis shows that independent of the class labels, for all the abuse-related test classes, the trained classifier performs worst when test examples fall under Category 3. Intuitively, this means that the platform-specific topics with low association with offensive language are least generalizable in terms of learning offensive behaviour. Categories 1 and 2, which include a mixture of general and identity-related topics with high potential for offensiveness, have more commonalities across datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.1" }, { "text": "Our goal is to measure the impact of various topics on generalization. However, modifying the topic distribution will impact the class distribution and data size. To control for this, we first analyze the impact of class distribution and data size on the classifier's performance. Then, we study the effect of topic distribution by limiting the training data to different topic categories. Impact of class distribution: The class distribution in the Wiki-dataset is fairly imbalanced; the ratio of the size of Wiki-Toxic to Wiki-Normal is 1:10. Class imbalance can lead to poor predictive performance on minority classes, as most learning algorithms are developed under the assumption of a balanced class distribution. To investigate the impact of the class distribution on generalization, we keep all the Wiki-Toxic instances and randomly sample the Wiki-Normal class to build the training sets with various ratios of toxic to normal instances. Figure 1 shows the classifier's accuracy on the test classes when trained on subsets with different class distributions. Observe that with the increase of the Wiki-Normal class size in the training dataset, the accuracy on all offensive test classes decreases while the accuracy on the Founta-Normal class increases. The classifier assigns more instances to the Normal class, resulting in a lower True Positive rate (accuracy on the offensive classes) and a higher True Negative rate (accuracy on the Normal class). The drop in accuracy is significant for the Waseem-Sexist, Waseem-Racist and Founta-Hateful classes and relatively minor for the Founta-Abusive class. Note that the impact of the class distribution is not reflected in the overall F1-score. The classifier trained on a balanced data subset (with class size ratio of 1:1) reaches a weighted-averaged F1-score of 0.896, which is very close to the F1-score of 0.899 resulting from training on the full dataset with the 1:10 class size ratio. However, in practice, the designers of such systems need to decide on the preferred class distribution depending on the distribution of classes in the test environment and the significance of the consequences of the False Positive and False Negative outcomes. 
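The following sketch shows one way such training subsets can be constructed; it assumes the Wiki-dataset is loaded as a pandas DataFrame with illustrative 'comment' and 'label' columns (1 = Toxic), which is an assumed layout rather than the authors' code.

```python
import pandas as pd

def subsample_normal(df, normal_per_toxic, seed=0):
    """Keep every Toxic instance and down-sample Normal to the requested ratio."""
    toxic = df[df["label"] == 1]
    normal = df[df["label"] == 0]
    n_keep = min(len(normal), normal_per_toxic * len(toxic))
    sampled_normal = normal.sample(n=n_keep, random_state=seed)
    # Shuffle so Toxic and Normal instances are interleaved in the training set.
    return pd.concat([toxic, sampled_normal]).sample(frac=1, random_state=seed)

# Toy frame standing in for the 160K Wiki-dataset comments.
wiki = pd.DataFrame({
    "comment": ["you are a complete idiot", "thanks for adding the citation",
                "nice rewrite of the lead section", "please discuss before reverting"],
    "label": [1, 0, 0, 0],
})
for ratio in (1, 3, 10):   # toxic-to-normal ratios of 1:1, 1:3 and 1:10
    subset = subsample_normal(wiki, normal_per_toxic=ratio)
    print(f"1:{ratio} subset has {len(subset)} instances")
    # ...train the same classifier on `subset` and evaluate on both test sets
```

Sweeping the ratio this way exposes the trade-off described above: adding more Normal instances raises the True Negative rate at the cost of the True Positive rate. 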
Impact of dataset size: To investigate the impact of the size of the training set, we fix the class ratio at 1:1 and compare the classifier's performance when trained on data subsets of different sizes. We randomly select subsets from the Wiki-dataset with sizes of 10K (5K Toxic and 5K Normal instances) and 30K (15K Toxic and 15K Normal instances). Each experiment is repeated 5 times, and the averaged results are presented in Figure 2 . The height of the box shows the standard deviation of accuracies. Observe that the average accuracies remain unchanged when the dataset's size triples at the same class balance ratio. This finding contrasts with the general assumption that more training data results in a higher classification performance. Impact of topics: In order to measure the impact of topics covered in the training dataset, we compare the classifier's performance when trained on only one of the three categories of topics described in Section 4. To control for the effect of class balance and dataset size, we run the experiments for two cases of toxic-to-normal ratios, 3K-3K and 3K-27K. Each experiment is repeated 5 times, and the average accuracy per class is reported in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 954, "end": 962, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 2641, "end": 2649, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 3404, "end": 3412, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Impact of Data Size, Class and Topic Distribution on Generalizability", "sec_num": "7" }, { "text": "For both cases of class size ratios, shown in Figures 3a and 3b , we notice that the classifier trained on instances belonging to Category 3 reaches higher accuracies on the offensive classes, but a significantly lower accuracy on the Founta-Normal class. The benign part of Category 3 is overwhelmed by Wikipedia-specific examples. Therefore, utterances dissimilar to these topics are labelled as Toxic, leading to a high accuracy on the toxic classes and a low accuracy on the Normal class. This is an example of the negative impact of topic bias on the detection of offensive utterances.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 63, "text": "Figures 3a and 3b", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Impact of Data Size, Class and Topic Distribution on Generalizability", "sec_num": "7" }, { "text": "In contrast, the classifiers trained on Categories 1 and 2 perform comparably across test classes. The classifier trained on Category 2 is slightly more effective in recognizing Founta-Hateful utterances, especially when the training set is balanced. This observation can be explained by a better representation of identity-related hatred in Category 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Impact of Data Size, Class and Topic Distribution on Generalizability", "sec_num": "7" }, { "text": "We showed that a classifier trained on instances from Category 3 suffers a big loss in accuracy on the Normal class. Here, we investigate how the performance of a classifier trained on the full Wikidataset changes when the Category 3 instances (all or the benign part only) are removed from the training set. Table 4 shows the results. Observe that removing the domain-specific benign examples, referred to as 'excl. C3 Normal' in Table 4 , improves the accuracies for all classes. 
As demonstrated in the previous experiments, this improvement cannot be attributed to the changes in the class balance ratio or the size of the training set, as both these factors cause a trade-off between True Positive and True Negative rates. Removing the Wikipediaspecific topics from the Wiki-dataset mitigates the topic bias and leads to this improvement. Similarly, when all the instances of Category 3 are removed from the training set ('excl. C3 all' in Table 4 ), the accuracy does not suffer and actually slightly improves on all classes, except Waseem-Racist. This is despite the fact that the training set has 58% less instances in the Normal class and 18% less instances in the Toxic class. The overall weighted-averaged F1-score on the full Out-of-Domain test set also slightly improves when the instances of Category 3 are excluded from the training data (Table 5) . Removing all the instances of Category 3 is particularly interesting since it can be done only with inspection of topics and without using the class labels.", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 431, "end": 438, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 944, "end": 951, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 1352, "end": 1361, "text": "(Table 5)", "ref_id": null } ], "eq_spans": [], "section": "Removing Platform-Specific Instances from the Training Set", "sec_num": "8" }, { "text": "To assess the impact of removing Wikipediaspecific examples on in-domain classification, we train a model on the training set of the Wiki-dataset, with and without excluding Category 3 instances, and evaluate it on the full test set of the Wiki-dataset. We observe that the in-domain performance does not suffer from removing Category 3 from the training data (Table 5) .", "cite_spans": [], "ref_spans": [ { "start": 360, "end": 369, "text": "(Table 5)", "ref_id": null } ], "eq_spans": [], "section": "Removing Platform-Specific Instances from the Training Set", "sec_num": "8" }, { "text": "In the task of online abuse detection, both False Positive and False Negative errors can lead to significant harm as one threatens the freedom of speech and ruins people's reputations, and the other ignores hurtful behaviour. Although balancing the class sizes has been traditionally exploited when dealing with imbalanced datasets, we showed that balanced class sizes may lead to high misclassification of normal utterances while improving the True Positive rates. This trade-off is not necessarily reflected in aggregated evaluation metrics such as F1-score but has important implications in real-life applications. We suggest evaluating each class (both positive and negative) separately taking Table 5 : Weighted macro-averaged F1-score for a classifier trained on portions of the Wiki-dataset and evaluated on the in-domain and out-of-domain test sets.", "cite_spans": [], "ref_spans": [ { "start": 698, "end": 705, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "9" }, { "text": "into account the potential costs of different types of errors. Furthermore, our analysis reveals that for generalizability, the size of the dataset is not as important as the class and topic distributions. We analyzed the impact of the topics included in the Wiki-dataset and showed that mitigating the topic bias improves accuracy rates across all the out-of-domain positive and negative classes. 
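A minimal sketch of this label-free pruning step is shown below, reusing the dominant-topic assignments produced by the LDA model; the function and the listed Category 3 topic ids are illustrative (topics 4 and 15 are two of the eleven Wikipedia-specific topics identified in Section 4).

```python
# Hypothetical subset of the Category 3 (platform-specific) topic ids found by
# manually inspecting top words; the paper identifies eleven such topics.
PLATFORM_SPECIFIC_TOPICS = {4, 15}

def drop_category3(texts, dominant_topic, labels=None, benign_only=False):
    """Prune instances whose dominant LDA topic is platform-specific.

    benign_only=False ('excl. C3 all') needs no class labels, so it can be run
    before annotation; benign_only=True ('excl. C3 Normal') keeps the Toxic
    part of Category 3 and therefore requires labels (1 = Toxic, 0 = Normal).
    """
    keep = []
    for i, topic in enumerate(dominant_topic):
        if topic in PLATFORM_SPECIFIC_TOPICS:
            if not benign_only:
                continue                     # drop every Category 3 instance
            if labels is not None and labels[i] == 0:
                continue                     # drop only the benign part
        keep.append(i)
    kept_texts = [texts[i] for i in keep]
    kept_labels = None if labels is None else [labels[i] for i in keep]
    return kept_texts, kept_labels
```
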
Our results suggest that the sheer amount of normal comments included in the training datasets might not be necessary and can even be harmful for generalization if the topic distribution of normal topics is skewed. When the classifier is trained on Category 3 instances only (Figure 3) , the Normal class is attributed to the over-represented topics, leading to high misclassification of normal texts or high False Positive rates.", "cite_spans": [], "ref_spans": [ { "start": 673, "end": 683, "text": "(Figure 3)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "9" }, { "text": "In general, when collecting new datasets, texts can be inspected through topic modeling using simple heuristics (e.g., keep topics related to demographic groups often subjected to abuse) in an attempt to balance the distribution of various topics and possibly sub-sample over-represented and less generalizable topics (e.g., high volumes of messages related to an incident with a celebrity figure happened during the data collection time) before the expensive annotation step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "9" }, { "text": "Our work highlights the importance of heuristic scrutinizing of topics in collected datasets before performing a laborious and expensive annotation. We suggest that unsupervised topic modeling and manual assessment of extracted topics can be used to mitigate the topic bias. In the case of the Wikidataset, we showed that more than half of the dataset can be safely removed without affecting either the in-domain or the out-of-domain performance. For future work, we recommend that topic analysis, augmentation of topics associated with offensive vocabulary and targeted demographics, and filtering of non-generalizable topics should be applied iteratively during data collection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "10" }, { "text": "https://meta.wikimedia.org/wiki/ Research:Detox/Data_Release 2 https://github.com/ewulczyn/ wiki-detox/blob/master/src/modeling/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Challenges for toxic comment classification: An in-depth error analysis", "authors": [ { "first": "Julian", "middle": [], "last": "Betty Van Aken", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Risch", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Krestel", "suffix": "" }, { "first": "", "middle": [], "last": "L\u00f6ser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Betty van Aken, Julian Risch, Ralf Krestel, and Alexan- der L\u00f6ser. 2018. Challenges for toxic comment clas- sification: An in-depth error analysis. In Proceed- ings of the 2nd Workshop on Abusive Language On- line.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Latent Dirichlet allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. 
Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Ma- chine Learning Research, 3(Jan):993-1022.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A survey of predictive modeling on imbalanced domains", "authors": [ { "first": "Paula", "middle": [], "last": "Branco", "suffix": "" }, { "first": "Lu\u00eds", "middle": [], "last": "Torgo", "suffix": "" }, { "first": "Rita", "middle": [ "P" ], "last": "Ribeiro", "suffix": "" } ], "year": 2016, "venue": "ACM Computing Surveys (CSUR)", "volume": "49", "issue": "2", "pages": "1--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paula Branco, Lu\u00eds Torgo, and Rita P. Ribeiro. 2016. A survey of predictive modeling on imbalanced do- mains. ACM Computing Surveys (CSUR), 49(2):1- 50.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The Internet's hidden rules: An empirical study of Reddit norm violations at micro, meso, and macro scales", "authors": [ { "first": "Eshwar", "middle": [], "last": "Chandrasekharan", "suffix": "" }, { "first": "Mattia", "middle": [], "last": "Samory", "suffix": "" }, { "first": "Shagun", "middle": [], "last": "Jhaver", "suffix": "" }, { "first": "Hunter", "middle": [], "last": "Charvat", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Bruckman", "suffix": "" }, { "first": "Cliff", "middle": [], "last": "Lampe", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Gilbert", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the ACM on Human-Computer Interaction", "volume": "2", "issue": "", "pages": "1--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eshwar Chandrasekharan, Mattia Samory, Shagun Jhaver, Hunter Charvat, Amy Bruckman, Cliff Lampe, Jacob Eisenstein, and Eric Gilbert. 2018. The Internet's hidden rules: An empirical study of Reddit norm violations at micro, meso, and macro scales. Proceedings of the ACM on Human- Computer Interaction, 2(CSCW):1-25.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. 
In Proceedings of the International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Min- nesota.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Large scale crowdsourcing and characterization of Twitter abusive behavior", "authors": [ { "first": "Constantinos", "middle": [], "last": "Antigoni Maria Founta", "suffix": "" }, { "first": "Despoina", "middle": [], "last": "Djouvas", "suffix": "" }, { "first": "Ilias", "middle": [], "last": "Chatzakou", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Leontiadis", "suffix": "" }, { "first": "Gianluca", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Athena", "middle": [], "last": "Stringhini", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Vakali", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Sirivianos", "suffix": "" }, { "first": "", "middle": [], "last": "Kourtellis", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antigoni Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of Twit- ter abusive behavior. In Proceedings of the Interna- tional AAAI Conference on Web and Social Media.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning from class-imbalanced data: Review of methods and applications", "authors": [ { "first": "Guo", "middle": [], "last": "Haixiang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Yijing", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Gu", "middle": [], "last": "Mingyun", "suffix": "" }, { "first": "Gong", "middle": [], "last": "Huang Yuanyue", "suffix": "" }, { "first": "", "middle": [], "last": "Bing", "suffix": "" } ], "year": 2017, "venue": "Expert Systems with Applications", "volume": "73", "issue": "", "pages": "220--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guo Haixiang, Li Yijing, Jennifer Shang, Gu Mingyun, Huang Yuanyue, and Gong Bing. 2017. Learning from class-imbalanced data: Review of methods and applications. 
Expert Systems with Applications, 73:220-239.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Online learning for latent Dirichlet allocation", "authors": [ { "first": "Matthew", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "Francis", "middle": [ "R" ], "last": "Bach", "suffix": "" }, { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" } ], "year": 2010, "venue": "Advances in Neural Information Processing Systems 23", "volume": "", "issue": "", "pages": "856--864", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Hoffman, Francis R. Bach, and David M. Blei. 2010. Online learning for latent Dirichlet allocation. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 856-864. Curran Associates, Inc.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Deceiving Google's Perspective API built for detecting toxic comments", "authors": [ { "first": "Hossein", "middle": [], "last": "Hosseini", "suffix": "" }, { "first": "Sreeram", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Baosen", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Radha", "middle": [], "last": "Poovendran", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.08138" ] }, "num": null, "urls": [], "raw_text": "Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Deceiving Google's Perspective API built for detecting toxic comments. arXiv preprint arXiv:1702.08138.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Analyzing labeled cyberbullying incidents on the Instagram social network", "authors": [ { "first": "Homa", "middle": [], "last": "Hosseinmardi", "suffix": "" }, { "first": "Sabrina", "middle": [ "Arredondo" ], "last": "Mattson", "suffix": "" }, { "first": "Rahat", "middle": [], "last": "Ibn Rafiq", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Han", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Social Informatics", "volume": "", "issue": "", "pages": "49--66", "other_ids": {}, "num": null, "urls": [], "raw_text": "Homa Hosseinmardi, Sabrina Arredondo Mattson, Ra- hat Ibn Rafiq, Richard Han, Qin Lv, and Shivakant Mishra. 2015. Analyzing labeled cyberbullying inci- dents on the Instagram social network. In Proceed- ings of the International Conference on Social Infor- matics, pages 49-66.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Survey on deep learning with class imbalance", "authors": [ { "first": "Justin", "middle": [ "M" ], "last": "Johnson", "suffix": "" }, { "first": "Taghi", "middle": [ "M" ], "last": "Khoshgoftaar", "suffix": "" } ], "year": 2019, "venue": "Journal of Big Data", "volume": "6", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin M. Johnson and Taghi M. Khoshgoftaar. 2019. Survey on deep learning with class imbalance. 
Jour- nal of Big Data, 6(1):27.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A just and comprehensive strategy for using NLP to address online abuse", "authors": [ { "first": "David", "middle": [], "last": "Jurgens", "suffix": "" }, { "first": "Libby", "middle": [], "last": "Hemphill", "suffix": "" }, { "first": "Eshwar", "middle": [], "last": "Chandrasekharan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3658--3666", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Jurgens, Libby Hemphill, and Eshwar Chan- drasekharan. 2019. A just and comprehensive strat- egy for using NLP to address online abuse. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 3658- 3666, Florence, Italy.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Reinforced training data selection for domain adaptation", "authors": [ { "first": "Miaofeng", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Song", "suffix": "" }, { "first": "Hongbin", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1957--1968", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miaofeng Liu, Yan Song, Hongbin Zou, and Tong Zhang. 2019. Reinforced training data selection for domain adaptation. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 1957-1968, Florence, Italy.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Tackling online abuse: A survey of automated abuse detection methods", "authors": [ { "first": "Pushkar", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Yannakoudakis", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.06024" ] }, "num": null, "urls": [], "raw_text": "Pushkar Mishra, Helen Yannakoudakis, and Ekaterina Shutova. 2019. Tackling online abuse: A survey of automated abuse detection methods. arXiv preprint arXiv:1908.06024.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Abusive language detection in online user content", "authors": [ { "first": "Chikashi", "middle": [], "last": "Nobata", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Tetreault", "suffix": "" }, { "first": "Achint", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the International Conference on World Wide Web", "volume": "", "issue": "", "pages": "145--153", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. 
In Proceed- ings of the International Conference on World Wide Web, pages 145-153.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Offensive language detection using multi-level classification", "authors": [ { "first": "Diana", "middle": [], "last": "Amir H Razavi", "suffix": "" }, { "first": "Sasha", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Uritsky", "suffix": "" }, { "first": "", "middle": [], "last": "Matwin", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Canadian Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "16--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amir H Razavi, Diana Inkpen, Sasha Uritsky, and Stan Matwin. 2010. Offensive language detection using multi-level classification. In Proceedings of the Canadian Conference on Artificial Intelligence, pages 16-27.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Software Framework for Topic Modelling with Large Corpora", "authors": [ { "first": "Petr", "middle": [], "last": "Radim\u0159eh\u016f\u0159ek", "suffix": "" }, { "first": "", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", "volume": "", "issue": "", "pages": "45--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Frame- work for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Val- letta, Malta. ELRA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Exploring the space of topic coherence measures", "authors": [ { "first": "Michael", "middle": [], "last": "R\u00f6der", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Both", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Hinneburg", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 8th ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "399--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael R\u00f6der, Andreas Both, and Alexander Hinneb- urg. 2015. Exploring the space of topic coherence measures. In Proceedings of the 8th ACM Interna- tional Conference on Web Search and Data Mining, pages 399-408.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning to select data for transfer learning with Bayesian optimization", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Plank", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "372--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian op- timization. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 372-382.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A survey on hate speech detection using natural language processing", "authors": [ { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media", "volume": "", "issue": "", "pages": "1--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Proceedings of the Fifth International Workshop on Natural Language Processing for So- cial Media, pages 1-10.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Predictive biases in natural language processing models: A conceptual framework and overview", "authors": [ { "first": "", "middle": [], "last": "Deven Santosh", "suffix": "" }, { "first": "H", "middle": [ "Andrew" ], "last": "Shah", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5248--5264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deven Santosh Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5248-5264.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Directions in abusive language training data: Garbage in, garbage out", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.01670" ] }, "num": null, "urls": [], "raw_text": "Bertie Vidgen and Leon Derczynski. 2020. Direc- tions in abusive language training data: Garbage in, garbage out. arXiv preprint arXiv:2004.01670.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Challenges and frontiers in abusive content detection", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Rebekah", "middle": [], "last": "Tromble", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Margetts", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "80--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019a. Challenges and frontiers in abusive content detec- tion. In Proceedings of the Third Workshop on Abu- sive Language Online, pages 80-93, Florence, Italy.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "How much online abuse is there? 
Alan Turing Institute", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Margetts", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Harris", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Helen Margetts, and Alex Harris. 2019b. How much online abuse is there? Alan Turing Insti- tute. November, 27.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Understanding abuse: A typology of abusive language detection subtasks", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "78--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78-84, Vancouver, BC, Canada.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter", "authors": [ { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Dirk", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL Student Research Workshop", "volume": "", "issue": "", "pages": "88--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Detection of Abusive Language: the Problem of Biased Datasets", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kleinbauer", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "602--608", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. 
In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 602- 608, Minneapolis, Minnesota.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Inducing a lexicon of abusive words -a feature-based approach", "authors": [ { "first": "Michael", "middle": [], "last": "Wiegand", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Greenberg", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1046--1056", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words -a feature-based approach. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1046-1056, New Orleans, Louisiana.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Ex machina: Personal attacks seen at scale", "authors": [ { "first": "Ellery", "middle": [], "last": "Wulczyn", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1391--1399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Pro- ceedings of the 26th International Conference on World Wide Web, pages 1391-1399.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Semeval-2019 Task 6: Identifying and categorizing offensive language in social media (OffensEval)", "authors": [ { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "Shervin", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Sara", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "Noura", "middle": [], "last": "Farra", "suffix": "" }, { "first": "Ritesh", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "75--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Semeval-2019 Task 6: Identifying and catego- rizing offensive language in social media (OffensE- val). 
In Proceedings of the 13th International Work- shop on Semantic Evaluation, pages 75-86.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The classifier's performance on various classes when trained on subsets of the Wiki-dataset with specific class distributions.", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "The classifier's average performance on various classes when trained on balanced subsets of the Wiki-dataset of different sizes.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "The classifier's performance on various classes when trained on specific topic categories.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "type_str": "table", "content": "", "html": null, "text": "Topics identified in the Wiki-dataset. For each topic, five of ten top words that are most representative of the assigned category are shown.", "num": null }, "TABREF3": { "type_str": "table", "content": "
", "html": null, "text": "Distribution of topic categories per class", "num": null }, "TABREF5": { "type_str": "table", "content": "
6 Generalizability of the Model Trained on the Wiki-dataset
To explore how well the Toxic class from the Wiki-dataset generalizes to other types of offensive behaviour, we train a binary classifier (Toxic vs. Normal) on the Wiki-dataset (combining the train, development and test sets) and test it on the Out-of-Domain Test set. This classifier is expected to predict a positive (Toxic) label for the instances of the classes Founta-Abusive, Founta-Hateful, Waseem-Sexism and Waseem-Racism, and a negative (Normal) label for the tweets in the Founta-Normal class. We fine-tune a BERT-based classifier (Devlin et al., 2019) with a linear prediction layer, a batch size of 16 and a learning rate of 2 × 10⁻⁵ for 2 epochs.
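As a concrete reference for this setup, the sketch below shows how such a fine-tuning run could look with the HuggingFace transformers library, using the hyper-parameters reported above. It is a minimal sketch, not the paper's exact training code: the variables train_texts and train_labels (0 = Normal, 1 = Toxic) are hypothetical placeholders for the combined Wiki-dataset splits, and the maximum sequence length of 128 is an illustrative assumption.

import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertForSequenceClassification, BertTokenizer

# Binary classifier: a linear prediction layer on top of BERT, as described above.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# train_texts, train_labels: hypothetical placeholders for the combined Wiki-dataset splits.
enc = tokenizer(train_texts, truncation=True, padding=True, max_length=128, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(train_labels))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(2):  # 2 epochs, as reported above
    for input_ids, attention_mask, labels in loader:
        optimizer.zero_grad()
        output = model(input_ids=input_ids.to(device),
                       attention_mask=attention_mask.to(device),
                       labels=labels.to(device))
        output.loss.backward()  # cross-entropy loss over the Toxic/Normal classes
        optimizer.step()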
Evaluation metrics: To investigate the trade-off between the True Positive and True Negative rates, in the following experiments we report accuracy per test class, calculated as the rate of correctly identified instances within a class. Accuracy over the toxic classes (Founta-Abusive, Founta-Hateful, Waseem-Sexism and Waseem-Racism) indicates the True Positive rate, while accuracy on the normal class (Founta-Normal) measures the True Negative rate.
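A minimal sketch of this metric, assuming the model's binary predictions and the fine-grained labels of the Out-of-Domain Test set are available as parallel Python lists (the names test_classes and predictions are illustrative placeholders):

from collections import defaultdict

# Test classes whose instances the Wiki-trained classifier should flag as Toxic (positive).
TOXIC_CLASSES = {"Founta-Abusive", "Founta-Hateful", "Waseem-Sexism", "Waseem-Racism"}

def accuracy_per_class(test_classes, predictions):
    """Rate of correctly identified instances within each test class.

    test_classes: fine-grained class of each test instance (e.g., "Founta-Normal").
    predictions:  binary model output for each instance (1 = Toxic, 0 = Normal).
    """
    correct, total = defaultdict(int), defaultdict(int)
    for cls, pred in zip(test_classes, predictions):
        expected = 1 if cls in TOXIC_CLASSES else 0  # Founta-Normal instances expect 0
        total[cls] += 1
        correct[cls] += int(pred == expected)
    return {cls: correct[cls] / total[cls] for cls in total}

Under this convention, accuracy over the four toxic classes corresponds to the True Positive rate and accuracy on Founta-Normal to the True Negative rate.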
", "html": null, "text": "Accuracy per test class and topic category for a classifier trained on Wiki-dataset. Best results in each row are in bold.", "num": null }, "TABREF7": { "type_str": "table", "content": "", "html": null, "text": "Accuracy per Out-of-Domain test class for a classifier trained on the Wiki-dataset, and the Wikidataset with Category 3 instances (Normal only or all) excluded.", "num": null } } } }