{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:53.095959Z"
},
"title": "Fine-tuning for multi-domain and multi-label uncivil language detection",
"authors": [
{
"first": "Kadir",
"middle": [
"Bulut"
],
"last": "Ozler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": "kbozler@email.arizona.edu"
},
{
"first": "Kate",
"middle": [
"M"
],
"last": "Kenski",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": "kkenski@email.arizona.edu"
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Rains",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": "srains@email.arizona.edu"
},
{
"first": "Yotam",
"middle": [],
"last": "Shmargad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Coe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Utah",
"location": {}
},
"email": "kevin.coe@utah.edu"
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Arizona",
"location": {}
},
"email": "bethard@email.arizona.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Incivility is a problem on social media, and it comes in many forms (name-calling, vulgarity, threats, etc.) and domains (microblog posts, online news comments, Wikipedia edits, etc.). Training machine learning models to detect such incivility must handle the multilabel and multi-domain nature of the problem. We present a BERT-based model for incivility detection and propose several approaches for training it for multi-label and multi-domain datasets. We find that individual binary classifiers outperform a joint multi-label classifier, and that simply combining multiple domains of training data outperforms other recentlyproposed fine-tuning strategies. We also establish new state-of-the-art performance on several incivility detection datasets.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Incivility is a problem on social media, and it comes in many forms (name-calling, vulgarity, threats, etc.) and domains (microblog posts, online news comments, Wikipedia edits, etc.). Training machine learning models to detect such incivility must handle the multilabel and multi-domain nature of the problem. We present a BERT-based model for incivility detection and propose several approaches for training it for multi-label and multi-domain datasets. We find that individual binary classifiers outperform a joint multi-label classifier, and that simply combining multiple domains of training data outperforms other recentlyproposed fine-tuning strategies. We also establish new state-of-the-art performance on several incivility detection datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In 2019, 93% of Americans identify incivility as a problem, with 68% classifying it as a \"major\" problem, and those who experienced incivility faced on average 10.2 uncivil interactions each week (Weber Shandwick et al., 2019) . Of those who expect civility to get worse, \"social media/the Internet\" tops the list of what they blame, above \"the White House\", \"politicians in general\", \"the news media\", etc. Especially on social media and the Internet, this incivility often takes the form of uncivil language, features of discussion that convey an unnecessarily disrespectful tone toward the discussion forum, its participants, or its topics (Coe et al., 2014) .",
"cite_spans": [
{
"start": 196,
"end": 226,
"text": "(Weber Shandwick et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 643,
"end": 661,
"text": "(Coe et al., 2014)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Uncivil language can range from name-calling (e.g., Mark, you're some kind of special stupid) to vulgarity (e.g., Just build the damn mine already!) to threats (e.g., Fine. I will destroy you.) and beyond. Different types of incivilities often appear in the same utterance (e.g., name-calling, vulgarity, and threats are all included in SHUT UP, YOU FAT POOP, OR I WILL KICK YOUR ASS!!!). Uncivil language appears in many places online, from microblogs like Twitter, to comments on online newspapers, to edit histories of resources like Wikipedia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Uncivil language detection is thus a multi-label and multi-domain language processing problem. While there has been much research in natural language processing methods for identifying such incivility, especially in the subarea of abusive language (Wiegand et al., 2019; Zampieri et al., 2019; Basile et al., 2019; Sadeque et al., 2019; van Aken et al., 2018, etc.) , the multi-label and multi-domain nature of incivility detection is understudied. We thus consider incivility detection on several datasets that (1) require the classification of incivility into several not-mutually-exclusive fine-grained categories, and (2) cover multiple genres of online interactions. Our contributions are:",
"cite_spans": [
{
"start": 248,
"end": 270,
"text": "(Wiegand et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 271,
"end": 293,
"text": "Zampieri et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 294,
"end": 314,
"text": "Basile et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 315,
"end": 336,
"text": "Sadeque et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 337,
"end": 365,
"text": "van Aken et al., 2018, etc.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We achieved a new state-of-the-art on both the Coe et al. (2014) and Conversation AI (2018) datasets using BERT (Devlin et al., 2019 ). \u2022 We compared several algorithms for training classifiers across the multiple domains in these datasets and showed that combining the training data from all domains outperforms other recently-proposed fine-tuning strategies. \u2022 We compared several approaches for handling the multi-label nature of these datasets and showed that independent binary classifiers outperform jointly-trained models.",
"cite_spans": [
{
"start": 49,
"end": 66,
"text": "Coe et al. (2014)",
"ref_id": "BIBREF3"
},
{
"start": 114,
"end": 134,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We frame uncivil language detection as a multilabel text classification problem, where the input is a piece of text, and the outputs are the types of incivilities (name-calling, vulgarity, etc.) that are present. Formally, we aim to learn a function h such that for each piece of text x: where repr(x) is a tensor representing that text (e.g., a series of word vectors), and y is a binary vector where y i is 1 if x contains the i th form of incivility and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h(repr(x)) = y",
"eq_num": "(1)"
}
],
"section": "Task",
"sec_num": "2"
},
{
"text": "We frame learning such h functions a multidomain classifier training problem, where training and testing data are drawn from multiple domains (news comments, politician tweets, etc.). Formally, given a domain D i , we aim to learn a function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "2"
},
{
"text": "h D i that maximizes performance on test data D itest by training on examples (x, y) drawn from training data D 1 train D 2 train . . . D n train .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "2"
},
{
"text": "We consider the following datasets for evaluating multi-label and multi-domain incivility detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Local news comments In this multi-label dataset, the following labels are defined and used to annotate online comments on local news articles by Coe et al. (2014):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 aspersion: \"Mean-spirited or disparaging words directed at a person or group of people.\" \u2022 lying accusation: \"Mean-spirited or disparaging words directed at an idea, plan, policy, or behavior.\" \u2022 name-calling: \"Stating or implying that an idea, plan, or policy was disingenuous.\" \u2022 pejorative: \"Using profanity or language that would not be considered proper (e.g., pissed, screw) inprofessional discourse.\" \u2022 vulgarity: \"Disparaging remark about the way in which a person communicates.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Local politics Tweets Coe and colleagues also annotated a collection of microblog posts from the Twitter accounts of their local politicians, but only for name-calling incivility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Russian troll Tweets Coe and colleagues also annotated a small subset of the 3 million English Tweets written by Russian trolls and collected by Linvill and Warren (2018) 1 , again for just name-calling incivility. Wikipedia comments In this multi-label dataset, also known as the Kaggle Toxic Comment Classification Challenge, Jigsaw/Google's Conversation AI team annotated comments from Wikipedia's talk page edits (Conversation AI, 2018) for the presence of the following types of abusive language, defined by Perspective AI (2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 toxic: \"A rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion.\" \u2022 severe-toxic: \"A very hateful, aggressive, disrespectful comment or otherwise very likely to make a user leave a discussion or give up on sharing their perspective. This attribute is much less sensitive to more mild forms of toxicity, such as comments that include positive uses of curse words.\" \u2022 obscene: \"Swear words, curse words, or other obscene or profane language.\" \u2022 threat: \"Describes an intention to inflict pain, injury, or violence against an individual or group.\" \u2022 insult: \"Insulting, inflammatory, or negative comment towards a person or a group of people.\" \u2022 identity-hate: \"Negative or hateful comments targeting someone because of their identity.\" Table 1 shows statistics for the different data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 776,
"end": 783,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The three datasets annotated by Coe and colleagues can be used in multi-domain experiments, as they share the same annotation scheme. They share only the label name-calling, so our multi-domain experiments consider only binary classification. The local news comments and Wikipedia comments datasets can be used in multi-label experiments, as they have been annotated for multiple forms of incivility. They do not share annotation schemes, so our multi-label experiments consider each multi-label dataset separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "There is much recent work on detecting incivility (also referred to as toxicity, abusive language, offensive language, etc.) in social media. Wiegand et al. 2019presents an overview of such efforts and shows that many datasets constructed for this purpose have unintended bias because of how they have been sampled. We focus on the Coe et al. 2014and Conversation AI (2018) datasets because they do not have the problems with topicbiased sampling that some other datasets do, where topic words are better predictors of incivility than uncivil words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "4"
},
{
"text": "There have also been several recent shared tasks that consider incivility. Both the OffensEval shared task (Zampieri et al., 2019) and the HatEval (Basile et al., 2019) shared task ran as part of SemEval-2019 and considered detection of various forms of offensive and hate speech. Neither of these tasks focused on a multi-label or multi-domain problem.",
"cite_spans": [
{
"start": 107,
"end": 130,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 147,
"end": 168,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "4"
},
{
"text": "A few models have been designed for and evaluated on the multi-label, multi-domain corpora we consider. Sadeque et al. (2019) considered the local news comments corpus, training recurrent neural network models, and focusing on only the top two most frequent labels for this dataset. They achieved 0.48 F 1 for name-calling and 0.53 F 1 for vulgarity. van Aken et al. (2018) presented multiple approaches to the Wikipedia comments dataset. They developed an ensemble of logistic regression, recurrent neural networks, and convolutional neural networks, achieving an AUC score of 0.983.",
"cite_spans": [
{
"start": 104,
"end": 125,
"text": "Sadeque et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "4"
},
{
"text": "There are a few recent works in cross-domain abusive language detection. Wiegand et al. (2018) ; Karan and\u0160najder (2018); Pamungkas and Patti (2019) all explore training models on one abusive language dataset and testing on another. They focus on binary predictions and bag-of-words support vector machine classifiers (though Pamungkas and Patti (2019) also explores a recurrent neural network). They do not consider multi-label problems, or modern pre-trained neural networks like BERT, which were more successful in recent shared tasks on abusive language (Zampieri et al., 2019) . They also evaluate on several datasets that have been identified as problematic by Wiegand et al. (2019) due to their use of topic-biased sampling.",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "Wiegand et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 558,
"end": 581,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 667,
"end": 688,
"text": "Wiegand et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prior Work",
"sec_num": "4"
},
{
"text": "We use BERT (Devlin et al., 2019) as the starting point for all experiments. BERT is a pre-trained transformer-based neural network that has shown impressive performance on a wide variety of NLP tasks. We follow the standard approach for finetuning BERT for text classification, placing a fully connected layer over BERT's [CLS] output. We use n sigmoids on this layer rather than a softmax activation, since we are performing multi-label classification. BERT is then fine-tuned as usual, with hyperparameters like learning rate, maximum sequence length, number of epochs, training batch size tuned on the development set. We explored each hyperparameter within the following ranges:",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "learning rate: 8e-6, 2e-5, 4e-5, 8e-5 maximum sequence length: 128, 256, 512 number of epochs: 2, 3, 4, 5, 6, 8 training batch size: 16, 32, 64, 128",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We consider three methods for training classifiers for prediction in multiple domains:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-domain models",
"sec_num": "5.1"
},
{
"text": "Single One classifier is fine-tuned for each domain. Joint One classifier is fine-tuned on the combined training data from all the domains. Joint\u2192Single First, a joint classifier is fine-tuned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-domain models",
"sec_num": "5.1"
},
{
"text": "Then, the joint classifier parameters are used to initialize n individual classifiers, one for each domain. This approach is inspired by Liu et al. (2019a) , where for some natural language understanding problems, they found that multi-task fine-tuning followed by individual task fine-tuning outperformed multitask fine-tuning alone.",
"cite_spans": [
{
"start": 137,
"end": 155,
"text": "Liu et al. (2019a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-domain models",
"sec_num": "5.1"
},
{
"text": "Since our multi-domain datasets share only the label name-calling, we train our multi-domain classifiers only for binary classification (i.e., they are not also multi-label). Sadeque et al. (2019) is the previous state-of-the-art on the local news comments. There is no prior state-of-the-art for the other datasets. Table 2 shows the results of these experiments. The first three rows compare the different training procedures on the development sets. We find that simply combining all the data achieves the best F 1 for both the local news comments and Russian troll Tweets data, and similar F 1 to the more complicated Joint\u2192Single procedure in the remaining dataset. When we evaluate this best model on the test data, we achieve a new state-of-the-art on the local news comments corpus, 0.56 F 1 . We are the first to report results on the local politics Tweets and Russian troll Tweets domains, as Sadeque et al. (2019) did not evaluate on these.",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "Sadeque et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Multi-domain models",
"sec_num": "5.1"
},
{
"text": "These results did not replicate the findings of Liu et al. (2019a) when applied to our incivility datasets; the extra fine-tuning for each domain was unhelpful, and simply combining all the data was the best. This probably argues for exploring other approaches for domain adapatation, e.g., Kim et al. (2016) , but it may also simply suggest that Coe et al. (2014)'s annotators were consistent across datasets, making it easy for BERT to learn the core linguistic phenomenon despite differences in domains.",
"cite_spans": [
{
"start": 291,
"end": 308,
"text": "Kim et al. (2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-domain models",
"sec_num": "5.1"
},
{
"text": "Similar to our approach for multi-domain models, we consider three methods for training classifiers for multi-label prediction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "Single One binary classifier is fine-tuned for each label. The output layer of the model is a single sigmoid unit. Joint One joint classifier is fine-tuned for all labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "The output layer of the model is n sigmoid units, one for each label. Joint\u2192Single First, a joint classifier is fine-tuned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "Then, the joint classifier parameters are used to initialize n binary classifiers, one for each label. This is again inspired by the multi-task training procedure of Liu et al. (2019a) .",
"cite_spans": [
{
"start": 166,
"end": 184,
"text": "Liu et al. (2019a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "Since our multi-label datasets do not share an annotation scheme, we train the multi-label classifiers on only one dataset at a time (i.e., they are not also multi-domain). Table 3 shows the results of these experiments 2 . We find that in most cases training individual binary classifiers (Single) is better than a jointly-learned multi-label classifier (Joint). This is somewhat surprising as the latter is the standard approach with neural networks (Adhikari et al., 2019) .",
"cite_spans": [
{
"start": 452,
"end": 475,
"text": "(Adhikari et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "Curious if the problem was some low-frequency classes, we tried training a multi-label model on just the three most frequent classes of the Wikipedia comments dataset (Joint top-3 classes), toxic, obscene, and insult. That slightly improved performance on those three classes, but of course at the cost of the classes now being ignored. Adding the staged training procedure (Joint\u2192Single) on top of this classifier only decreased performance. This suggests that class imbalance may be part of the problem, but is not the full explanation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "Note that we are the first to report all individual label F 1 s on both datasets. In the case of the local news comments data, this is because Sadeque et al. 2019, noting the class imbalance problem, decided to only train and evaluate on two classes. In the case of the Wikipedia comments data, this is because the official evaluation metric is AUC, so most systems focused on optimizing this measure. However, as Table 3 shows, while we achieve a state-of-the-art AUC, AUC is not a very discriminative measure for this dataset. For example, both the Single model that predicts all six classes and the Joint top-3 classes model that doesn't even try to predicts severe-toxic, threat, or Table 3 : Multi-label results: Performance on each label, for different multi-label training methods across different datasets. When results from two or more models are comparable, the highest performance is marked in bold. The final column is the official evaluation measure for the Wikipedia comments dataset. Sadeque et al. (2019) is the state-of-the-art on the local news comments data, and van Aken et al. 2018is the state-of-the-art on the Wikipedia comments data.",
"cite_spans": [
{
"start": 999,
"end": 1020,
"text": "Sadeque et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 414,
"end": 421,
"text": "Table 3",
"ref_id": null
},
{
"start": 687,
"end": 694,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "identity-hate achieve the same AUC of 0.990. The F 1 scores more clearly show that the Joint top-3 classes model is as good or better for all labels but insult.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-label models",
"sec_num": "5.2"
},
{
"text": "We focused on a BERT-based model due to its top-ranking performance in related shared tasks (Zampieri et al., 2019) , but recent advances over BERT, e.g., RoBERTa (Liu et al., 2019b ) might yield additional gains. We also focused on the limited number of datasets that could support multilabel and/or multi-domain experiments, but our results could be strengthened by creating new multilabel, multi-domain datasets. Finally, class imbalance only partly explains why a joint multi-label classifier failed to outperform independent binary classifiers, indicating that further investigation is needed into multi-label classification approaches for uncivil language.",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 163,
"end": 181,
"text": "(Liu et al., 2019b",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "6"
},
{
"text": "We applied BERT on multi-label and multi-domain incivility detection tasks and achieved a new stateof-the-art on several different datasets. In exploring different training procedures, we found that it was better to directly combine data from multiple domains than other more complex procedures, and that it was better to train individual binary classifiers than to train a joint multi-label classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/fivethirtyeight/ russian-troll-tweets/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the Wikipedia comments dataset does not have a development split, so \"Dev\" experiments on that dataset are actually on the test set, following van Aken et al. (2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rethinking complex neural network architectures for document classification",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Adhikari",
"suffix": ""
},
{
"first": "Achyudh",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4046--4051",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1408"
]
},
"num": null,
"urls": [],
"raw_text": "Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. Rethinking complex neural net- work architectures for document classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages 4046-4051, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Challenges for toxic comment classification: An in-depth error analysis",
"authors": [
{
"first": "Betty",
"middle": [],
"last": "van Aken",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Risch",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Krestel",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "L\u00f6ser",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "33--42",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5105"
]
},
"num": null,
"urls": [],
"raw_text": "Betty van Aken, Julian Risch, Ralf Krestel, and Alexan- der L\u00f6ser. 2018. Challenges for toxic comment clas- sification: An in-depth error analysis. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 33-42, Brussels, Belgium. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela San- guinetti. 2019. SemEval-2019 task 5: Multilin- gual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th Inter- national Workshop on Semantic Evaluation, pages 54-63, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Coe",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Kenski",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A"
],
"last": "Rains",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Communication",
"volume": "64",
"issue": "4",
"pages": "658--679",
"other_ids": {
"DOI": [
"10.1111/jcom.12104"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Coe, Kate Kenski, and Stephen A. Rains. 2014. Online and Uncivil? Patterns and Determinants of Incivility in Newspaper Website Comments. Jour- nal of Communication, 64(4):658-679.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Toxic comment classification challenge",
"authors": [],
"year": 2018,
"venue": "Conversation AI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conversation AI. 2018. Toxic comment classifica- tion challenge. https://www.kaggle.com/c/ jigsaw-toxic-comment-classification- challenge.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cross-domain detection of abusive language online",
"authors": [
{
"first": "Mladen",
"middle": [],
"last": "Karan",
"suffix": ""
},
{
"first": "Jan\u0161najder",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "132--137",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5117"
]
},
"num": null,
"urls": [],
"raw_text": "Mladen Karan and Jan\u0160najder. 2018. Cross-domain detection of abusive language online. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 132-137, Brussels, Belgium. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Frustratingly easy neural domain adaptation",
"authors": [
{
"first": "Young-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Sarikaya",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "387--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Proceedings of COLING 2016, the 26th Inter- national Conference on Computational Linguistics: Technical Papers, pages 387-396, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Troll factories: The internet research agency and state-sponsored agenda building",
"authors": [
{
"first": "Darren",
"middle": [
"L"
],
"last": "Linvill",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"L"
],
"last": "Warren",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darren L. Linvill and Patrick L. Warren. 2018. Troll factories: The internet research agency and state-sponsored agenda building.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multi-lingual Wikipedia summarization and title generation on low resource corpus",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zuying",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yinan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources",
"volume": "",
"issue": "",
"pages": "17--25",
"other_ids": {
"DOI": [
"10.26615/978-954-452-058-8_004"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Liu, Lei Li, Zuying Huang, and Yinan Liu. 2019a. Multi-lingual Wikipedia summarization and title generation on low resource corpus. In Proceedings of the Workshop MultiLing 2019: Summarization Across Languages, Genres and Sources, pages 17- 25, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon",
"authors": [
{
"first": "Endang",
"middle": [
"Wahyu"
],
"last": "Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {
"DOI": [
"10.18653/v1/P19-2051"
]
},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas and Viviana Patti. 2019. Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 363-370, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Available attributes and languages",
"authors": [
{
"first": "A",
"middle": [
"I"
],
"last": "Perspective",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Perspective AI. 2020. Available attributes and lan- guages. https://support.perspectiveapi.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "com/s/about-the-api-attributes-andlanguages",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "com/s/about-the-api-attributes-and- languages.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Incivility detection in online comments",
"authors": [
{
"first": "Farig",
"middle": [],
"last": "Sadeque",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Rains",
"suffix": ""
},
{
"first": "Yotam",
"middle": [],
"last": "Shmargad",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Kenski",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Coe",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)",
"volume": "",
"issue": "",
"pages": "283--291",
"other_ids": {
"DOI": [
"10.18653/v1/S19-1031"
]
},
"num": null,
"urls": [],
"raw_text": "Farig Sadeque, Stephen Rains, Yotam Shmargad, Kate Kenski, Kevin Coe, and Steven Bethard. 2019. Inci- vility detection in online comments. In Proceedings of the Eighth Joint Conference on Lexical and Com- putational Semantics (*SEM 2019), pages 283-291, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Civilty in america 2019: Solutions for tomorrow",
"authors": [
{
"first": "Weber",
"middle": [],
"last": "Shandwick",
"suffix": ""
},
{
"first": "Powell",
"middle": [],
"last": "Tate",
"suffix": ""
},
{
"first": "Krc",
"middle": [],
"last": "Research",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weber Shandwick, Powell Tate, and KRC Research. 2019. Civilty in america 2019: Solutions for tomorrow. https://www.webershandwick.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Detection of Abusive Language: the Problem of Biased Datasets",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kleinbauer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "602--608",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1060"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of Abusive Language: the Problem of Biased Datasets. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602-608, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Inducing a lexicon of abusive words -a feature-based approach",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Clayton",
"middle": [],
"last": "Greenberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1046--1056",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1095"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Wiegand, Josef Ruppenhofer, Anna Schmidt, and Clayton Greenberg. 2018. Inducing a lexicon of abusive words -a feature-based approach. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1046-1056, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval)",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "75--86",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2010"
]
},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval). In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 75- 86, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"text": "Statistics for the multi-domain and multi-label datasets considered. For data sets with no standard split, or where the test set is unavailable as in Conversation AI (2018), we created our own custom train/dev split.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "Multi-domain results: Performance on the label name-calling, for different multi-domain training methods across different datasets. When results from two or more models are comparable, the highest performance is marked in bold.",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}