{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:43.761204Z" }, "title": "Six Attributes of Unhealthy Conversations", "authors": [ { "first": "Ilan", "middle": [], "last": "Price", "suffix": "", "affiliation": {}, "email": "ilan.price@maths.ox.ac.uk" }, { "first": "Jordan", "middle": [], "last": "Gifford-Moore", "suffix": "", "affiliation": {}, "email": "jordan.gifford-moore@flinders.edu.au" }, { "first": "Jory", "middle": [], "last": "Fleming", "suffix": "", "affiliation": {}, "email": "fleminj6@mailbox.sc.edu" }, { "first": "Saul", "middle": [], "last": "Musker", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Maayan", "middle": [], "last": "Roichman", "suffix": "", "affiliation": {}, "email": "maayan.roichman@anthro.ox.ac.uk" }, { "first": "Guillaume", "middle": [], "last": "Sylvain", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "", "affiliation": {}, "email": "nthain@google.com" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "", "affiliation": {}, "email": "ldixon@google.com" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a new dataset of approximately 44000 comments labeled by crowdworkers. Each comment is labelled as either 'healthy' or 'unhealthy', in addition to binary labels for the presence of six potentially 'unhealthy' sub-attributes: (1) hostile; (2) antagonistic, insulting, provocative or trolling; (3) dismissive; (4) condescending or patronising; (5) sarcastic; and/or (6) an unfair generalisation. Each label also has an associated confidence score. We argue that there is a need for datasets which enable research based on a broad notion of 'unhealthy online conversation'. We build this typology to encompass a substantial proportion of the individual comments which contribute to unhealthy online conversation. For some of these attributes, this is the first publicly available dataset of this scale. We explore the quality of the dataset, present some summary statistics and initial models to illustrate the utility of this data, and highlight limitations and directions for further research.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present a new dataset of approximately 44000 comments labeled by crowdworkers. Each comment is labelled as either 'healthy' or 'unhealthy', in addition to binary labels for the presence of six potentially 'unhealthy' sub-attributes: (1) hostile; (2) antagonistic, insulting, provocative or trolling; (3) dismissive; (4) condescending or patronising; (5) sarcastic; and/or (6) an unfair generalisation. Each label also has an associated confidence score. We argue that there is a need for datasets which enable research based on a broad notion of 'unhealthy online conversation'. We build this typology to encompass a substantial proportion of the individual comments which contribute to unhealthy online conversation. For some of these attributes, this is the first publicly available dataset of this scale. 
We explore the quality of the dataset, present some summary statistics and initial models to illustrate the utility of this data, and highlight limitations and directions for further research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Analysis of online user discussion continues to be a critical area of interdisciplinary research. Increasing rates of internet access and the development of a diverse range of online forums has allowed for conversation between individuals across the globe on an extraordinary range of topics. However, this has been accompanied by a surge in abuse and other negative behaviours online, the impacts of which have been well-documented in academic research. It has been found that targeted negative comments and harassment online can seriously impact individual well-being (Weingartner and Stahel, 2019; Bauman, 2013) , force users to leave a community or reduce online participation (Wulczyn et al., 2017; Blackburn and Kwak, 2014) , and potentially lead to offline hate-crimes (Mulki et al., 2019; Hassan et al., 2018) . While these forms of comments may be explicit or overtly harmful, they are also often difficult to detect or ambiguous. Where there are insufficient moderation resources to scale with a forum's user-base, this can lead to unchecked negative discourse, or cause website administrators to restrict user comment functions. This means that research which aims to enable automated moderation, provide a review triage service for human moderation teams, or design systems to nudge users towards healthier conversation, has significant potential for contributing to both the availability and quality of online discourse.", "cite_spans": [ { "start": 570, "end": 600, "text": "(Weingartner and Stahel, 2019;", "ref_id": "BIBREF27" }, { "start": 601, "end": 614, "text": "Bauman, 2013)", "ref_id": "BIBREF2" }, { "start": 681, "end": 703, "text": "(Wulczyn et al., 2017;", "ref_id": "BIBREF28" }, { "start": 704, "end": 729, "text": "Blackburn and Kwak, 2014)", "ref_id": "BIBREF3" }, { "start": 776, "end": 796, "text": "(Mulki et al., 2019;", "ref_id": "BIBREF17" }, { "start": 797, "end": 817, "text": "Hassan et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A persistent challenge for researchers and site administrators in this area is the need to: (a) establish a typology of comments which are undesirable in online discussions; (b) apply this typology in a consistent and reliable manner; and (c) account for adversarial user behaviour in response to moderation. This is complicated by the fact that there is no single objective set of categories for speech which ought to be excluded in all contexts, with perceptions of undesirable speech differing across individuals, cultures, geographies, and online communities (Vidgen et al., 2019) .", "cite_spans": [ { "start": 563, "end": 584, "text": "(Vidgen et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior research on toxic comments online has found that classifiers trained on crowdsourced data can be effective at detecting the most overt forms of toxic comments. 
However, there remain difficulties in detecting subtler forms of toxicity which may be implicit, require idiosyncratic knowledge, familiarity with the conversation context, or familiarity with particular cultural tropes (Kohli et al., 2018; van Aken et al., 2018; Parekh and Patel, 2017) . One of the key ingredients to progress on this front will be high quality, large, annotated datasets addressing these more subtle harmful attributes, from which machine learning models will be able to learn. Unfortunately, for most subtler toxic attributes there are few available datasets (or none, particularly in many languages other than English), which is a bottleneck preventing further research (Fortuna et al., 2019) .", "cite_spans": [ { "start": 386, "end": 406, "text": "(Kohli et al., 2018;", "ref_id": "BIBREF14" }, { "start": 407, "end": 429, "text": "van Aken et al., 2018;", "ref_id": "BIBREF0" }, { "start": 430, "end": 453, "text": "Parekh and Patel, 2017)", "ref_id": "BIBREF18" }, { "start": 858, "end": 880, "text": "(Fortuna et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We aim to contribute to research in this area through the release of the Unhealthy Comment Corpus (UCC) of approximately 44,000 comments and corresponding crowdsourced labels and confidence scores. The labelling typology for the dataset identifies for each comment a higher-level classification of whether that comment 'has a place in a healthy online conversation', accompanied for each comment by binary labels for whether it is: (1) hostile, (2) antagonistic, insulting, provocative or trolling (together, 'antagonistic'), (3) dismissive, (4) condescending or patronising (together, 'condescending'), (5) sarcastic, and/or (6) an unfair generalisation. For each label there is also an associated confidence score (between 0.5 and 1). The UCC is open source and available on Github. 1 The UCC contributes further high quality data on attributes like sarcasm, hostility, and condescension, adding to existing datasets on these and related attributes (Wang and Potts, 2019; Davidson et al., 2017; Wulczyn et al., 2017; Chen et al., 2017) , and provides (to the best of our knowledge) the first dataset of this scale with labels for dismissiveness, unfair generalisations, antagonistic behavior, and overall assessments of whether those comments fall within 'healthy' conversation. We also make use of and illustrate the benefits of annotator trustworthiness scores when crowdsourcing labels on subjective data of this sort.", "cite_spans": [ { "start": 785, "end": 786, "text": "1", "ref_id": null }, { "start": 951, "end": 973, "text": "(Wang and Potts, 2019;", "ref_id": "BIBREF26" }, { "start": 974, "end": 996, "text": "Davidson et al., 2017;", "ref_id": "BIBREF6" }, { "start": 997, "end": 1018, "text": "Wulczyn et al., 2017;", "ref_id": "BIBREF28" }, { "start": 1019, "end": 1037, "text": "Chen et al., 2017)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1 github.com/conversationai/unhealthy-conversations This paper is structured as follows. Section 2 outlines the motivation and background to the UCC attribute typology. Section 3 details the data collection and quality control processes. In Section 4 we present some summary statistics, benefits, and limitations of the data, and in Section 5 we present a baseline classification model for this dataset, and evaluate its performance. 
Section 6 highlights potential sources of bias in this dataset, and the need to be cognisant of these when conducting further research in this area .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we broadly characterise a healthy online public conversation as one where posts and comments are made in good faith, are not overly hostile or destructive, and generally invite engagement. Such a conversation may include robust engagement and debate, and is generally (though not always) focused on substance and ideas. Importantly, though, healthy contributions to online conversations are not necessarily friendly, grammatically correct, well constructed, intellectual, substantive, or even free of any vulgarity. Some harmful contributions to conversations are obviously derogatory, threatening, violent, or insulting (Anderson et al., 2018) , and these are the sorts of comments which have been the primary focus of research in algorithmic moderation assistance and related areas. However, many of those comments which deter people from engagement or create downward spirals in interactions can be more subtle (Zhang et al., 2018) . This is especially the case with conversations online, many of which (i) take place in a 'public' forum that is visible to thousands of others, and (ii) involve strangers who have never met and know little about one another (Santana, 2014). These two features of online conversations can sometimes enhance commenters' sensitivity to subtler forms of toxicity like sarcasm, condescension, or dismissiveness, amplifying their negative impact on conversations despite the fact that these attributes may be less (or not at all) harmful in other specific contexts.", "cite_spans": [ { "start": 636, "end": 659, "text": "(Anderson et al., 2018)", "ref_id": "BIBREF1" }, { "start": 929, "end": 949, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "From 'toxic' comments to 'unhealthy' conversation", "sec_num": "2" }, { "text": "Identifying subtle indicators of problematic online comments is a difficult task. There are at least three reasons for this. First, they are less extreme and therefore less likely to use clearly identifiable explicit or inflammatory language. Second, a substantive point might be made in an inflammatory way, or a remark may be perceived differently depending on the context, norms, and expectations of the reader. Third, there is an even greater risk of identifying 'false positives' and 'false negatives', since many of the expressions used in subtle forms of toxicity can also be deployed for positive contributions. For example, sarcasm is often used in derisive or bullying ways, but it can also be used for humour or to express a substantive, inoffensive point (Vidgen et al., 2019) .", "cite_spans": [ { "start": 767, "end": 788, "text": "(Vidgen et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "From 'toxic' comments to 'unhealthy' conversation", "sec_num": "2" }, { "text": "The challenge is to identify the subtle characteristics of harmful comments online despite their ambiguity, without falsely identifying healthy comments. We differentiate between two categories. The first, which is the most well studied to date, are those whose explicit intention is to insult, threaten, or abuse. 
The second category consists of comments which engage with others, share an opinion, or contribute to the conversation, but are written in a way which is likely to antagonise, hurt, or deter others. We found these comments to be at least as prevalent in the sample data (Table 1 ). Our typology of unhealthy attributes aims to include this second category of comments, and determine whether annotators believe they belong in a healthy online conversation.", "cite_spans": [], "ref_spans": [ { "start": 578, "end": 586, "text": "(Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "From 'toxic' comments to 'unhealthy' conversation", "sec_num": "2" }, { "text": "Our hypothesis was that together these 6 attributes account for the majority of 'unhealthy' comments online, but that there will still be some comments that are 'unhealthy' but do not display any sub-attribute, and also some which are 'healthy' despite representing one or more sub-attributes (see Figure 1 ). In general, whether the presence of these attributes indicates healthy or unhealthy conversation will also depend significantly on the nature of the forum and users. Nonetheless, the combination of an abstract 'health' rating with the other 6 attributes provides a useful dataset for investigating nuanced comments, and could be used to help develop a broader range of models that are customised for specific production environments.", "cite_spans": [], "ref_spans": [ { "start": 298, "end": 306, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "From 'toxic' comments to 'unhealthy' conversation", "sec_num": "2" }, { "text": "The dataset comprises randomly chosen comments from the Globe and Mail news site (sampled from the SFU Opinion and Comment Corpus dataset) (Kolhatkar et al., 2019) , of 250 characters or less. Comment scores were crowdsourced using Figure Eight (now Appen). The annotation job consisted of 588 crowdworkers (annotators) providing 244468 judgements on 44355 comments. 2 Each annotator was asked to identify for each comment whether it was healthy and if any of the attributes were present, in the form of a standard questionnaire (see Appendix A). Annotators were not given any wider context or additional information about where a comment was posted or how it was engaged with by other users.", "cite_spans": [ { "start": 139, "end": 163, "text": "(Kolhatkar et al., 2019)", "ref_id": "BIBREF15" }, { "start": 232, "end": 238, "text": "Figure", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "To both accommodate and attempt to resolve meaningful disagreement, we applied a dynamic judgement method which requests additional annotations for those comments on which there was insufficient consensus (either yes or no with a confidence of less than 75%). All comments were annotated at least three times, and more annotators were added, up to a limit of five annotators per comment until sufficient consensus was reached. Annotation Job Refinement. The inherent subtlety, subjectivity, and frequent ambiguity of the attributes covered in this dataset make crowdsourcing quality attribute labels an unavoidably difficult process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "Typically the goal in an annotation task would simply be to maximise agreement between the multiple annotators of each comment. 
However, when the annotation task is inherently subjective and meaningful difference of opinion is itself valuable data, the goal becomes instead to maximise common understanding of the task across annotators. This entails tailoring the phrasing of the questions put to annotators, so as to create as common an understanding as possible of what each question is really asking. This way, disagreement between annotators reflected in the dataset will represent different reasonable readings of the same comment which are themselves important to capture. In research on irony and sarcasm, for example, Filatova noted the difficulty even among expert researchers in formally defining these terms (Filatova, 2012) . For the other attributes included in this dataset which are as (if not more) ambiguous and subtle than sarcasm, we expect this to hold true as well.", "cite_spans": [ { "start": 820, "end": 836, "text": "(Filatova, 2012)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "The exact wording of each question on the questionnaire went through multiple iterations, tested by smaller scale experiments to evaluate effectiveness. The quality of the resulting data was evaluated manually by our team, calculating the proportion of perceived mistaken annotations and their 'severity': to what extent a judgement was 'obviously wrong', as opposed to an understandable alternative reading of a comment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "We found that providing annotators with precise and more comprehensive definitions of each attribute was not more likely to produce interannotator agreement or better quality data. Neither, however, were best results produced by asking simple, 'yes or no' questions such as 'Is this comment dismissive?' for all attributes. The best results were achieved by relying primarily on annotators implicit understandings of and intuitions about the attributes, aided by brief inline explanations. We added explanations to avoid mistakes for those attributes which are more ambiguous, and for which our smaller tests had indicated required further guidance. These can be seen in the questionnaire included as Appendix A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "To ensure that disagreement reflects reasonable difference of opinion, rather than inattention or misunderstanding of the task, it is necessary to apply a method of quality control. The attempt to create a labeled dataset is premised on the assumption of some 'ground truth'; that it is possible for comments to have labels and confidence scores accurately representing the presence of one or more attributes to some extent. However, the extent to which a comment displays one or more attribute is subjective, and the scores would be unhelpful if they did not capture what a wider and more diverse audience than our team of authors would understand the comments to mean. 
Our process of quality control therefore aimed to reduce the number of 'bad' annotators, those who either do not understand or appropriately engage with the task, while still allowing for differences of opinion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "Our primary quality control mechanism was to collate a set of 'test comments', for which we had manually established the correct answers. Annotators encountered one test comment per batch of seven comments they reviewed, without knowing which of the seven was the test comment, and their running accuracy on these test comments was defined as their 'trustworthiness score'. The task required that annotators maintain a trustworthiness score of more than 78%. If an annotator dropped below this level, they were removed from the annotator pool for this task, and all of their prior annotations were discarded 3 . The removed 'bad' annotator judgements were replaced by newly collected trusted judgements as necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "We restricted our test comments to what were (in our view) clear and definitive examples of the attributes, such that one would fail on the test comments only if one has an incorrect understanding of what is meant by a particular attribute. In the course of our preliminary small-scale refining iterations of the questionnaire, analysis of responses revealed some recurring misunderstandings or mistakes. For example, a common error was to label all non-sarcastic humour as sarcasm, or to conflate polite disagreement with dismissiveness. As a result, we identified and included specific test comments, drawn from real examples, aimed at reducing these common errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "We included very few test comments for the higher level question on whether a comment belongs in a healthy conversation. Any test questions on this topic were very extreme examples, such as highly abusive explicit comments, to ensure that annotators were not randomly answering that question. We had two reasons for minimising the use of test comments for this question. Firstly, since this was in our view the most open-ended question, it is difficult to establish tests on the basis of which to exclude annotators. Secondly, allowing greater annotator discretion on this question provides insight on whether there is a correlation between the six attributes and being labelled as unhealthy. 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source data and annotation", "sec_num": "3" }, { "text": "The dataset comprises a total of 44355 comments labelled 'yes' or 'no' for each attribute, along with a confidence score for each label. The labels and corresponding confidence scores for each attribute are based on an aggregation of the answers given by different annotators, weighted by their respective 'trustworthiness' scores. As an example to demonstrate this process, consider a comment annotated by 5 annotators with trustworthiness scores 0.78, 0.85, 0.9, 1.0, and 0.95, who judge a comment for a particular attribute with judgements 'yes', 'yes', 'yes', 'no', 'yes' respectively. Let T be the sum of their trustworthiness scores, and T_y and T_n the sums of the trustworthiness scores of those who answered 'yes' and 'no' respectively. 
The label is then determined by which of T_y or T_n is larger, in this case T_y, and the confidence score is T_y/T, in this case 0.78.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The UCC dataset", "sec_num": "4" }, { "text": "The proportion of comments that contain each attribute is shown in Table 1 . As the comments were sampled from the SFU Opinion and Comment Corpus dataset, the prevalence for each attribute is inevitably low. Despite the label imbalance, the dataset represents an important contribution to identification of this wider variety of subtle attributes, with thousands of positive examples for each. Our manual analysis during initial iterations of the annotation job indicated that these final proportions are roughly representative of the prevalence of these attributes in similar live contexts, such as North American online newspaper comment sections. [Footnote 4: There remains a clear methodological issue with using this data for comparing the set of comments classed as 'unhealthy' with those classed as one or more of the other attributes: having been asked all questions as part of the same questionnaire, annotators may have been primed to associate the attributes with 'unhealthiness', even if they would not have done so otherwise.] [Figure 2: confidence score distributions for each attribute (Rosenblatt et al., 1956) . Figure 2a shows confidence scores for those comments labelled as 'no' for each unhealthy attribute, while Figure 2b represents those of comments labelled 'yes'.] There are specific attributes, notably sarcasm, for which it can be possible to collate a corpus of self-labelled data, for example by scraping tweets with '#sarcastic' from Twitter, or comments followed by '/s' on Reddit (Khodak et al., 2018) . In these specific circumstances, the avoidance of the need to crowdsource and pay for annotations can permit much larger and more balanced datasets. However, for all other attributes we consider, and in fora like the comment sections of news sites, relying on self-labelled data is not possible. For these attributes, crowdsourcing is the only feasible way to obtain high quality data, and as such we would expect proportions reflecting those observed in similar contexts.", "cite_spans": [ { "start": 475, "end": 476, "text": "4", "ref_id": null }, { "start": 853, "end": 878, "text": "(Rosenblatt et al., 1956)", "ref_id": "BIBREF20" }, { "start": 1476, "end": 1497, "text": "(Khodak et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 920, "end": 929, "text": "Figure 2a", "ref_id": null }, { "start": 1026, "end": 1035, "text": "Figure 2b", "ref_id": null } ], "eq_spans": [], "section": "The UCC dataset", "sec_num": "4" }, { "text": "Inspection of random subsets of the new UCC dataset reveals that the data is generally of a high quality, and captures important nuances, accurately identifying these subtle attributes, both when they overlap (as is common), and also when they do not (see Figure 3 for examples). Figure 4 shows the correlations between attributes, calculated based on the pool of comments which are labelled as one or more of the six unhealthy attributes. The figure highlights two important facts. First, the relatively low correlation between most attributes indicates that the dataset succeeds in differentiating between these different types of subtle unhealthy attributes. As expected, there is significant correlation between antagonistic and hostile comments. 
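To make the aggregation and consensus rules described earlier in this section concrete, the following is a minimal illustrative sketch in Python. It is not the authors' released code, and the data layout (a list of (answer, trustworthiness) pairs per comment) is assumed purely for illustration.

```python
# Minimal sketch (not the authors' released code) of the trustworthiness-weighted
# label aggregation described in Section 4 and of the dynamic judgement rule from
# Section 3 (3 to 5 annotations per comment, until a label reaches 75% confidence).
# The (answer, trustworthiness) pair layout is an assumption for this sketch.

def aggregate(judgements):
    """judgements: list of (answer, trustworthiness) pairs, answer in {'yes', 'no'}.

    Returns (label, confidence), where confidence = T_label / T.
    """
    t_total = sum(trust for _, trust in judgements)
    t_yes = sum(trust for answer, trust in judgements if answer == 'yes')
    t_no = t_total - t_yes
    if t_yes >= t_no:
        return 'yes', t_yes / t_total
    return 'no', t_no / t_total


def needs_more_annotations(judgements, min_n=3, max_n=5, threshold=0.75):
    """Dynamic judgement rule: request another annotation while consensus is
    below the confidence threshold and fewer than max_n annotators have judged."""
    if len(judgements) < min_n:
        return True
    _, confidence = aggregate(judgements)
    return confidence < threshold and len(judgements) < max_n


# Worked example from the paper: five annotators with trustworthiness scores
# 0.78, 0.85, 0.9, 1.0, 0.95 answering 'yes', 'yes', 'yes', 'no', 'yes'.
example = [('yes', 0.78), ('yes', 0.85), ('yes', 0.9), ('no', 1.0), ('yes', 0.95)]
print(aggregate(example))  # -> ('yes', 0.776...), i.e. 'yes' with confidence ~0.78
```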
There is some correlation between the often more subtle attributes like dismissiveness/condescension and antagonism, while these are less correlated with hostility. We also include correlations with the 'toxicity' scores produced by Jigsaw's Perspective API (perspectiveapi.com), which again confirms that our attributes, in particular those other than antagonistic and hostile, capture something distinct from overt toxicity. A notable feature of Figure 4 is the slightly negative correlations between sarcasm and other attributes, indicating that annotators generally did not associate sarcasm with other unhealthy attributes. Secondly, 'unhealthy' correlates significantly with antagonism and hostility, but very little with the other attributes, indicating a fairly broad general notion of healthy conversation on the part of the annotators, which mostly includes dismissive, condescending, sarcastic and generalising comments. Despite its generally high quality, the nature of the task and the annotation method entails some level of noise in the dataset. This noise is particularly difficult to quantify given the need to distinguish between different but reasonable interpretations of a comment, and simply incorrect annotations caused by a lack of understanding or care on the part of an annotator (for example, one comment reading \"You are an ignorant * sshole\" was judged not to be needlessly hostile, an obvious error).", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 264, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 280, "end": 288, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 1199, "end": 1207, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "The UCC dataset", "sec_num": "4" }, { "text": "This highlights the difficulties of using traditional reliability metrics like Krippendorff's \u03b1 for crowdsourced annotations on subjective tasks (D'Arcey et al., 2019). Krippendorff's \u03b1 is a number between 0 and 1 intended to indicate the extent to which annotators agree compared with what would have happened if they guessed randomly. The base assumption then is that all disagreement between annotators decreases reliability, which is not necessarily the case for subjective attributes (Salminen et al., 2018b; Swanson et al., 2014) .", "cite_spans": [ { "start": 489, "end": 513, "text": "(Salminen et al., 2018b;", "ref_id": "BIBREF22" }, { "start": 514, "end": 535, "text": "Swanson et al., 2014)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "The UCC dataset", "sec_num": "4" }, { "text": "Despite the above caveat, we conduct analysis using Krippendorff's \u03b1 (K-\u03b1) for two reasons. Firstly, to allow for comparison with other literature in the field, we report the K-\u03b1 for judgements on each attribute in Table 2 . They range from 0.31 -0.39, which is comparable with other datasets labelling 'similar' phenomenon, such as sarcasm (0.24-0.38) (Swanson et al., 2014; Justo et al., 2018; D'Arcey et al., 2019) , and hate speech with sub-attributes from Figure Eight annotators (0.21) (Lazaridou et al., 2020) . The one exception is the set of judgements on whether a comment has a place in a healthy conversation, with a lower K-\u03b1 of 0.26. Given that this is a more open-ended question, this is not necessarily surprising. 
Secondly, to the extent that K-\u03b1 is an important reliability metric for this form of data, it supports our use of 'trustworthiness' scores when aggregating judgements on a given comment to decide labels and confidence scores. Specifically, as shown in Figure 5 , we see that as we increase the trustworthiness threshold for annotators whose judgements are included, the resulting K-\u03b1 steadily increases. This provides some indication that our trustworthiness scores do capture the reliability of our annotators, and thus that their judgements ought to be weighted more highly in the final confidence in a comment's labels.", "cite_spans": [ { "start": 353, "end": 375, "text": "(Swanson et al., 2014;", "ref_id": "BIBREF24" }, { "start": 376, "end": 395, "text": "Justo et al., 2018;", "ref_id": "BIBREF12" }, { "start": 396, "end": 417, "text": "D'Arcey et al., 2019)", "ref_id": "BIBREF5" }, { "start": 492, "end": 516, "text": "(Lazaridou et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 215, "end": 222, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 461, "end": 491, "text": "Figure Eight annotators (0.21)", "ref_id": "FIGREF0" }, { "start": 983, "end": 991, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "The UCC dataset", "sec_num": "4" }, { "text": "Also included in the UCC dataset are the individual annotations for each comment by all 'trusted' annotators. Users of the data may therefore apply any alternative trustworthiness threshold, or use a preferred aggregation method to derive labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attribute", "sec_num": null }, { "text": "Use of a pre-trained BERT model (Devlin et al., 2019) and fine-tuning on this dataset produces classifiers with modest performance (Figure 6 ), compared to the state of the art for sequence classification. The best-performing attributes, 'hostile' and 'antagonistic', are also those most similar to the types of attributes typically annotated in comment classification work. The other attributes seem to cluster together, with the 'sarcastic' label particularly noteworthy for its low performance.", "cite_spans": [ { "start": 32, "end": 53, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 131, "end": 140, "text": "(Figure 6", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Models and results", "sec_num": "5" }, { "text": "To give context to the model performance, we follow (Wulczyn et al., 2017) and compare our performance with human workers. For each comment, we hold out one annotator to act as our 'human model' and use the aggregated score of the other annotators as the ground truth to compute the ROC AUC. To stabilize our results, this procedure is repeated five times and the average reported. We use the same test sets to compute the ROC AUC of the trained BERT model and average those scores as well. As we can see, for all attributes other than 'sarcastic' the BERT model outperforms a randomly selected human annotator, indicating that it has sufficiently captured the semantic and syntactic structures for these attributes. 
For 'sarcastic', the gap between the BERT model and human annotators indicates a rich area for studying whether model performance can be improved.", "cite_spans": [ { "start": 52, "end": 74, "text": "(Wulczyn et al., 2017)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Models and results", "sec_num": "5" }, { "text": "One further challenge which comes with annotating more subtle unhealthy attributes is the potential to encode unintended societal biases and value judgements in models trained on this data. For example, sarcasm is often communicated by stating something which the author presumes to be so obviously untrue that it will be read as sarcastic. These presumptions reflect the author's biases - or, in the case of comment annotation, labelling comments as sarcastic reflects the annotators' beliefs about what is obviously untrue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Potential Unintended Biases", "sec_num": "6" }, { "text": "With the comment corpus being in English, and given the subtlety of the attributes, higher quality annotations were likely to be achieved by annotators with first-language proficiency in English. The best proxy for this available on the Figure Eight platform was to restrict the country of origin of our annotators to a limited subset of countries with a large English-speaking population (as either an official language or primary second language), in particular: the United States, the United Kingdom, South Africa, Sweden, New Zealand, Norway, Netherlands, Denmark, Canada, and Australia. Although our early iterations of this annotation job indicated a significant reduction in annotators failing test comments once this was enforced, this introduces a clear cultural and geographic bias. For example, the comment 'Iran and Turkey are the BEST places to be a woman!' was scored as sarcastic with 72% confidence by the annotators. Finding this comment sarcastic relies on an assumption by the annotators (a pool which excludes residents of Iran and Turkey) that Iran and Turkey are clearly not the best places to be women. Our annotators were not selected as broadly representative across language, geography, culture, or other attributes, and this assumption is not universal. While important research has begun to explore the composition of the global crowd workforce, it remains difficult to select for annotators representative of specific characteristics on crowd work platforms (Posch et al., 2018) . In the current version of the Appen platform, unless annotators are asked standalone questions on demographics, the only available details are the annotators' country and/or city (and even then, only for some annotators). Research and modelling based on this dataset, and similar datasets, requires the exercise of great care in mitigating biases produced by the underlying data collection. This potential selection bias is likely to be evident across the broader healthy/unhealthy categorisation along with each of the attributes. 
Prior research has found substantial disagreement on subtle attributes of speech both among individuals and across geographies (Salminen et al., 2018a) .", "cite_spans": [ { "start": 1493, "end": 1513, "text": "(Posch et al., 2018)", "ref_id": "BIBREF19" }, { "start": 2176, "end": 2200, "text": "(Salminen et al., 2018a)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 237, "end": 259, "text": "Figure Eight platform", "ref_id": null } ], "eq_spans": [], "section": "Attribute", "sec_num": null }, { "text": "Finally, the source of the comments and their manner of presentation could introduce bias into the dataset. The source data is solely from a Canadian online newspaper comment section and comments were presented in isolation to annotators, without the surrounding context of the news article and other comments. Annotators were also provided with the standard questionnaire (Appendix A), which includes high level descriptions of the attributes that may not generalise across cultures. There is a substantial body of research demonstrating the potential impact of introducing biased datasets, and Vidgen et al. (Vidgen et al., 2019) note that public datasets in this area are prone to systematic bias and mislabelling, with interannotator agreement typically low for complex multi-class tasks of this kind. These challenges are to be expected in a relatively new field which aims to improve on human baseline moderation for highly subjective characteristics of online discussion. At this early stage of research, we must be mindful of addressing these biases and cognisant that the manner in which this data is collected can have critical impacts on users in a production environment. It is important to note at this stage of the field in general, and with our understanding of this dataset in particular, that the UCC dataset is not designed to train models which are immediately available for automated moderation without human intervention in a live online setting. As the field develops further, initial use-cases may include less interventionist 'nudges' or reminders of how a comment could be perceived by a reader to assist participants in discussions online.", "cite_spans": [ { "start": 610, "end": 631, "text": "(Vidgen et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Attribute", "sec_num": null }, { "text": "We introduced a new corpus of labelled comments and a typology for some of the more subtle aspects of unhealthy online conversation. Our typology provides 6 sub-attributes of typically unhealthy con-tributions, and confidence scores for the labels. We described the process and challenges in creating such a dataset, and provided statistics to convey the scale of data. In particular, we note that although there is a substantial body of research on more extreme forms of negative contributions, such as toxicity, the subtler forms of unhealthy comments in our typology are often similarly prevalent online. Our analysis also shows that the sub-attributes are largely independent from overt toxicity, and mostly correlated with unhealthy contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further work", "sec_num": "7" }, { "text": "We also provide results from a modern baseline ML model (fine tuning BERT) and note that performance exceeds that of a crowd-worker. This suggests that further work could also be done to collect a larger corpus of annotations to improve the capacity to measure models in this domain. 
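As a rough illustration of the human-baseline comparison summarised above (Section 5), the sketch below holds out one annotator per comment as the 'human model' and scores both that annotator and a trained model against the aggregate of the remaining annotators. This is an assumption-laden reconstruction rather than the authors' evaluation code; the data layout and variable names are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' evaluation code) of the held-out
# annotator baseline from Section 5: per comment, one annotator is held out as the
# 'human model' and compared, via ROC AUC, against the trustworthiness-weighted
# aggregate of the remaining annotators; the same targets are used for the model.
import random
from sklearn.metrics import roc_auc_score

def held_out_human_auc(comments, model_scores, seed=0):
    """comments: one judgement list per comment, each a list of
    (answer, trustworthiness) pairs; model_scores: model probability per comment."""
    rng = random.Random(seed)
    truths, human_preds = [], []
    for judgements in comments:
        held_out = rng.randrange(len(judgements))
        rest = [j for i, j in enumerate(judgements) if i != held_out]
        # Ground truth: weighted-majority label over the remaining annotators.
        t_yes = sum(t for a, t in rest if a == 'yes')
        t_no = sum(t for a, t in rest if a == 'no')
        truths.append(1 if t_yes > t_no else 0)
        # The held-out annotator's binary judgement acts as the 'human model' score.
        human_preds.append(1 if judgements[held_out][0] == 'yes' else 0)
    human_auc = roc_auc_score(truths, human_preds)
    model_auc = roc_auc_score(truths, model_scores)
    return human_auc, model_auc
```

In the paper this procedure is repeated five times and the averages reported; the sketch shows a single repetition with a fixed random seed.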
While this dataset provides a new contribution in gathering the 6 attributes under the umbrella of an 'unhealthy' conversation, there also remains an open question as to how exhaustive this typology of unhealthy contributions is. Future research and annotation work could further refine the typology, amend the standard questionnaire, or apply it to forums which differ in cultural and geographic context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further work", "sec_num": "7" }, { "text": "Further work also includes exploring the unintended biases in the model and data. This dataset is well-placed to further explore early signs of conversations going awry (Zhang et al., 2018) , while models based on the data could be explored to provide assistance to moderating online conversations.", "cite_spans": [ { "start": 169, "end": 189, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Further work", "sec_num": "7" }, { "text": "According to statistics provided by Appen, the average time spent on those annotations which were included in the final dataset was between 12 and 13 seconds per comment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This was a threshold selected through initial test jobs, to balance budget and quality considerations. A higher threshold yields more trustworthy annotations, but consequently discards more existing data when annotators drop below that threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "A Annotator Questionnaire", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "In this job, you will be asked to read a comment and to express an overall opinion about whether you think it has a place in a healthy conversation online.You will also be asked to identify whether it displays a range of characteristics that may lead to unhealthy conversations. These characteristics include: sarcasm, gross generalisations, hostility, aggression, dismissiveness, condescension and patronization.All of the comments you will see are real comments posted by users in online conversations. Most of them will have been posted in response to one or more comments made by others (which you are not given). However, the questions are designed in such a way that you should be able to answer them without seeing these other comments.The data collected here will be used to help build tools which promote healthier conversations online.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": null }, { "text": "\u2022 Please bear in mind that the questions do not ask whether you agree or disagree with the substance of each comment. Do your best to ignore your own opinion on the substantive idea or claim made in the comment when answering the questions.\u2022 Please be sure to read the full text of the comment before answering the questions. Sometimes the part of a comment which displays one or more of the attributes you will be asked about, appears close to the end of the comment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Note:", "sec_num": null }, { "text": "What are the characteristics of a healthy conversation?\u2022 6. Is the intention of this comment to insult, antagonize, provoke, or troll other users? 7. 
A comment with a condescending or patronising tone will generally assume an attitude of superiority, and imply that the other commenter(s) is ignorant, child-like, naive, or unintelligent. Such comments will usually imply that the other commenter shouldn't be taken seriously.Is this comment condescending and/or patronising?8. A comment is dismissive if it rejects or ridicules another comment without good reason, or tries to push another commenter and their ideas out of the conversations. Note: A comment which expresses disagreement is not necessarily dismissive.Is this comment dismissive?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Healthy Online Conversations:", "sec_num": "1." } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Challenges for toxic comment classification: An in-depth error analysis", "authors": [ { "first": "Julian", "middle": [], "last": "Betty Van Aken", "suffix": "" }, { "first": "Ralf", "middle": [], "last": "Risch", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Krestel", "suffix": "" }, { "first": "", "middle": [], "last": "L\u00f6ser", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)", "volume": "", "issue": "", "pages": "33--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Betty van Aken, Julian Risch, Ralf Krestel, and Alexan- der L\u00f6ser. 2018. Challenges for toxic comment clas- sification: An in-depth error analysis. In Proceed- ings of the 2nd Workshop on Abusive Language On- line (ALW2), pages 33-42.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Toxic talk: How online incivility can undermine perceptions of media", "authors": [ { "first": "Sara", "middle": [ "K" ], "last": "Ashley A Anderson", "suffix": "" }, { "first": "Dominique", "middle": [], "last": "Yeo", "suffix": "" }, { "first": "", "middle": [], "last": "Brossard", "suffix": "" }, { "first": "A", "middle": [], "last": "Dietram", "suffix": "" }, { "first": "Michael A", "middle": [], "last": "Scheufele", "suffix": "" }, { "first": "", "middle": [], "last": "Xenos", "suffix": "" } ], "year": 2018, "venue": "International Journal of Public Opinion Research", "volume": "30", "issue": "1", "pages": "156--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashley A Anderson, Sara K Yeo, Dominique Brossard, Dietram A Scheufele, and Michael A Xenos. 2018. Toxic talk: How online incivility can undermine per- ceptions of media. International Journal of Public Opinion Research, 30(1):156-168.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Cyberbullying: What does research tell us? Theory into practice", "authors": [ { "first": "Sheri", "middle": [], "last": "Bauman", "suffix": "" } ], "year": 2013, "venue": "", "volume": "52", "issue": "", "pages": "249--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheri Bauman. 2013. Cyberbullying: What does re- search tell us? Theory into practice, 52(4):249-256.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Stfu noob! predicting crowdsourced decisions on toxic behavior in online games", "authors": [ { "first": "Jeremy", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Haewoon", "middle": [], "last": "Kwak", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 23rd international conference on World wide web", "volume": "", "issue": "", "pages": "877--888", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeremy Blackburn and Haewoon Kwak. 2014. Stfu noob! 
predicting crowdsourced decisions on toxic behavior in online games. In Proceedings of the 23rd international conference on World wide web, pages 877-888.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Presenting a labelled dataset for real-time detection of abusive user posts", "authors": [ { "first": "Hao", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Mckeever", "suffix": "" }, { "first": "Sarah", "middle": [ "Jane" ], "last": "Delany", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference on Web Intelligence", "volume": "", "issue": "", "pages": "884--890", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Chen, Susan Mckeever, and Sarah Jane Delany. 2017. Presenting a labelled dataset for real-time de- tection of abusive user posts. In Proceedings of the International Conference on Web Intelligence, pages 884-890.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Wait signals predict sarcasm in online debates", "authors": [ { "first": "J", "middle": [], "last": "Trevor D'arcey", "suffix": "" }, { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Jean E Fox", "middle": [], "last": "Tree", "suffix": "" } ], "year": 2019, "venue": "Dialogue & Discourse", "volume": "10", "issue": "2", "pages": "56--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Trevor D'Arcey, Shereen Oraby, and Jean E Fox Tree. 2019. Wait signals predict sarcasm in online debates. Dialogue & Discourse, 10(2):56-78.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Automated hate speech detection and the problem of offensive language", "authors": [ { "first": "Thomas", "middle": [], "last": "Davidson", "suffix": "" }, { "first": "Dana", "middle": [], "last": "Warmsley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Macy", "suffix": "" }, { "first": "Ingmar", "middle": [], "last": "Weber", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eleventh International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. 
In NAACL-HLT.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Measuring and mitigating unintended bias in text classification", "authors": [ { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" }, { "first": "John", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Sorensen", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Vasserman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society", "volume": "", "issue": "", "pages": "67--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- ing unintended bias in text classification. In Pro- ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67-73.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Irony and sarcasm: Corpus generation and analysis using crowdsourcing", "authors": [ { "first": "Elena", "middle": [], "last": "Filatova", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012)", "volume": "", "issue": "", "pages": "392--398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Filatova. 2012. Irony and sarcasm: Corpus gen- eration and analysis using crowdsourcing. In Pro- ceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012), pages 392-398.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A hierarchically-labeled portuguese hate speech dataset", "authors": [ { "first": "Paula", "middle": [], "last": "Fortuna", "suffix": "" }, { "first": "Joao", "middle": [], "last": "Rocha Da", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Silva", "suffix": "" }, { "first": "S\u00e9rgio", "middle": [], "last": "Wanner", "suffix": "" }, { "first": "", "middle": [], "last": "Nunes", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "94--104", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paula Fortuna, Joao Rocha da Silva, Leo Wanner, S\u00e9rgio Nunes, et al. 2019. A hierarchically-labeled portuguese hate speech dataset. 
In Proceedings of the Third Workshop on Abusive Language Online, pages 94-104.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Exposure to extremist online content could lead to violent radicalization: A systematic review of empirical evidence", "authors": [ { "first": "Ghayda", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "S\u00e9bastien", "middle": [], "last": "Brouillette-Alarie", "suffix": "" }, { "first": "S\u00e9raphin", "middle": [], "last": "Alava", "suffix": "" }, { "first": "Divina", "middle": [], "last": "Frau-Meigs", "suffix": "" }, { "first": "Lysiane", "middle": [], "last": "Lavoie", "suffix": "" }, { "first": "Arber", "middle": [], "last": "Fetiu", "suffix": "" }, { "first": "Wynnpaul", "middle": [], "last": "Varela", "suffix": "" }, { "first": "Evgueni", "middle": [], "last": "Borokhovski", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Venkatesh", "suffix": "" }, { "first": "C\u00e9cile", "middle": [], "last": "Rousseau", "suffix": "" } ], "year": 2018, "venue": "International journal of developmental science", "volume": "12", "issue": "1-2", "pages": "71--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ghayda Hassan, S\u00e9bastien Brouillette-Alarie, S\u00e9raphin Alava, Divina Frau-Meigs, Lysiane Lavoie, Ar- ber Fetiu, Wynnpaul Varela, Evgueni Borokhovski, Vivek Venkatesh, C\u00e9cile Rousseau, et al. 2018. Ex- posure to extremist online content could lead to vi- olent radicalization: A systematic review of empiri- cal evidence. International journal of developmen- tal science, 12(1-2):71-88.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Detection of sarcasm and nastiness: new resources for spanish language", "authors": [ { "first": "Raquel", "middle": [], "last": "Justo", "suffix": "" }, { "first": ", M In\u00e9s", "middle": [], "last": "Jos\u00e9 M Alcaide", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Torres", "suffix": "" }, { "first": "", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2018, "venue": "Cognitive Computation", "volume": "10", "issue": "6", "pages": "1135--1151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raquel Justo, Jos\u00e9 M Alcaide, M In\u00e9s Torres, and Mar- ilyn Walker. 2018. Detection of sarcasm and nasti- ness: new resources for spanish language. Cognitive Computation, 10(6):1135-1151.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A large self-annotated corpus for sarcasm", "authors": [ { "first": "Mikhail", "middle": [], "last": "Khodak", "suffix": "" }, { "first": "Nikunj", "middle": [], "last": "Saunshi", "suffix": "" }, { "first": "Kiran", "middle": [], "last": "Vodrahalli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2018. A large self-annotated corpus for sarcasm. 
In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Paying attention to toxic comments online", "authors": [ { "first": "Manav", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Kuehler", "suffix": "" }, { "first": "John", "middle": [], "last": "Palowitch", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manav Kohli, Emily Kuehler, and John Palowitch. 2018. Paying attention to toxic comments online. Web: https://stanford.io/2YfKMvE.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The sfu opinion and comments corpus: A corpus for the analysis of online news comments", "authors": [ { "first": "Varada", "middle": [], "last": "Kolhatkar", "suffix": "" }, { "first": "Hanhan", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Cavasso", "suffix": "" }, { "first": "Emilie", "middle": [], "last": "Francis", "suffix": "" }, { "first": "Kavan", "middle": [], "last": "Shukla", "suffix": "" }, { "first": "Maite", "middle": [], "last": "Taboada", "suffix": "" } ], "year": 2019, "venue": "Corpus Pragmatics", "volume": "", "issue": "", "pages": "1--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Varada Kolhatkar, Hanhan Wu, Luca Cavasso, Emilie Francis, Kavan Shukla, and Maite Taboada. 2019. The sfu opinion and comments corpus: A corpus for the analysis of online news comments. Corpus Prag- matics, pages 1-36.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Discovering biased news articles leveraging multiple human annotations", "authors": [ { "first": "Konstantina", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "L\u00f6ser", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Mestre", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Naumann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "1268--1277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konstantina Lazaridou, Alexander L\u00f6ser, Maria Mestre, and Felix Naumann. 2020. Discovering bi- ased news articles leveraging multiple human anno- tations. In Proceedings of the 12th Conference on Language Resources and Evaluation, pages 1268- 1277.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "L-hsab: a levantine twitter dataset for hate speech and abusive language", "authors": [ { "first": "Hala", "middle": [], "last": "Mulki", "suffix": "" }, { "first": "Hatem", "middle": [], "last": "Haddad", "suffix": "" }, { "first": "Chedi", "middle": [], "last": "Bechikh Ali", "suffix": "" }, { "first": "Halima", "middle": [], "last": "Alshabani", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "111--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hala Mulki, Hatem Haddad, Chedi Bechikh Ali, and Halima Alshabani. 2019. L-hsab: a levantine twit- ter dataset for hate speech and abusive language. 
In Proceedings of the Third Workshop on Abusive Language Online, pages 111-118.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Toxic comment tools: A case study", "authors": [ { "first": "Pooja", "middle": [], "last": "Parekh", "suffix": "" }, { "first": "Hetal", "middle": [], "last": "Patel", "suffix": "" } ], "year": 2017, "venue": "International Journal of Advanced Research in Computer Science", "volume": "8", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pooja Parekh and Hetal Patel. 2017. Toxic comment tools: A case study. International Journal of Advanced Research in Computer Science, 8(5).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Characterizing the global crowd workforce: A cross-country comparison of crowdworker demographics", "authors": [ { "first": "Lisa", "middle": [], "last": "Posch", "suffix": "" }, { "first": "Arnim", "middle": [], "last": "Bleier", "suffix": "" }, { "first": "Fabian", "middle": [], "last": "Fl\u00f6ck", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Strohmaier", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lisa Posch, Arnim Bleier, Fabian Fl\u00f6ck, and Markus Strohmaier. 2018. Characterizing the global crowd workforce: A cross-country comparison of crowdworker demographics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Remarks on some nonparametric estimates of a density function", "authors": [ { "first": "Murray", "middle": [], "last": "Rosenblatt", "suffix": "" } ], "year": 1956, "venue": "The Annals of Mathematical Statistics", "volume": "27", "issue": "3", "pages": "832--837", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murray Rosenblatt et al. 1956. Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics, 27(3):832-837.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Online hate interpretation varies by country, but more by individual: A statistical analysis using crowdsourced ratings", "authors": [ { "first": "Joni", "middle": [], "last": "Salminen", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Veronesi", "suffix": "" }, { "first": "Hind", "middle": [], "last": "Almerekhi", "suffix": "" }, { "first": "Soon-Gyo", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Bernard J", "middle": [], "last": "Jansen", "suffix": "" } ], "year": 2018, "venue": "2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS)", "volume": "", "issue": "", "pages": "88--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joni Salminen, Fabio Veronesi, Hind Almerekhi, Soon-Gyo Jung, and Bernard J Jansen. 2018a. Online hate interpretation varies by country, but more by individual: A statistical analysis using crowdsourced ratings. In 2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS), pages 88-94.
IEEE.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Inter-rater agreement for social computing studies", "authors": [ { "first": "Joni", "middle": [ "O" ], "last": "Salminen", "suffix": "" }, { "first": "Hind", "middle": [ "A" ], "last": "Al-Merekhi", "suffix": "" }, { "first": "Partha", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Bernard", "middle": [ "James" ], "last": "Jansen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 5th International Conference on Social Networks Analysis, Management and Security", "volume": "", "issue": "", "pages": "80--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joni O. Salminen, Hind A. Al-Merekhi, Partha Dey, and Bernard James Jansen. 2018b. Inter-rater agreement for social computing studies. In Proceedings of the 5th International Conference on Social Networks Analysis, Management and Security, pages 80-87.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Virtuous or vitriolic: The effect of anonymity on civility in online newspaper reader comment boards", "authors": [ { "first": "Arthur", "middle": [ "D" ], "last": "Santana", "suffix": "" } ], "year": 2014, "venue": "Journalism practice", "volume": "8", "issue": "1", "pages": "18--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arthur D Santana. 2014. Virtuous or vitriolic: The effect of anonymity on civility in online newspaper reader comment boards. Journalism practice, 8(1):18-33.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Getting reliable annotations for sarcasm in online dialogues", "authors": [ { "first": "Reid", "middle": [], "last": "Swanson", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Lukin", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Eisenberg", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Corcoran", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "4250--4257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Reid Swanson, Stephanie Lukin, Luke Eisenberg, Thomas Corcoran, and Marilyn Walker. 2014. Getting reliable annotations for sarcasm in online dialogues. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4250-4257.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Challenges and frontiers in abusive content detection", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Harris", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Rebekah", "middle": [], "last": "Tromble", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Hale", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Margetts", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "80--93", "other_ids": { "DOI": [ "10.18653/v1/W19-3509" ] }, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Alex Harris, Dong Nguyen, Rebekah Tromble, Scott Hale, and Helen Margetts. 2019. Challenges and frontiers in abusive content detection.
In Proceedings of the Third Workshop on Abusive Language Online, pages 80-93, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Talkdown: A corpus for condescension detection in context", "authors": [ { "first": "Zijian", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3702--3710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zijian Wang and Christopher Potts. 2019. Talkdown: A corpus for condescension detection in context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3702-3710.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Online aggression from a sociological perspective: An integrative view on determinants and possible countermeasures", "authors": [ { "first": "Sebastian", "middle": [], "last": "Weingartner", "suffix": "" }, { "first": "Lea", "middle": [], "last": "Stahel", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Third Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "181--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Weingartner and Lea Stahel. 2019. Online aggression from a sociological perspective: An integrative view on determinants and possible countermeasures. In Proceedings of the Third Workshop on Abusive Language Online, pages 181-187.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Ex machina: Personal attacks seen at scale", "authors": [ { "first": "Ellery", "middle": [], "last": "Wulczyn", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1391--1399", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th International Conference on World Wide Web, pages 1391-1399.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Conversations gone awry: Detecting early signs of conversational failure", "authors": [ { "first": "Justine", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jonathan", "middle": [ "P" ], "last": "Chang", "suffix": "" }, { "first": "Cristian", "middle": [], "last": "Danescu-Niculescu-Mizil", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Dixon", "suffix": "" }, { "first": "Yiqing", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nithum", "middle": [], "last": "Thain", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Taraborelli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justine Zhang, Jonathan P Chang, Cristian Danescu-Niculescu-Mizil, Lucas Dixon, Yiqing Hua, Nithum Thain, and Dario Taraborelli. 2018.
Conversations gone awry: Detecting early signs of conversational failure. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "A visualisation of the proposed typology of unhealthy online comments. The grey pentagon represents unhealthy comments. Note that in this figure, 'hostile' and 'antagonistic' are represented jointly as 'hostile'.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Figure 2: Density estimation (Rosenblatt et al., 1956) of confidence scores for each attribute. Figure 2a shows confidence scores for those comments labelled as 'no' for each unhealthy attribute, while Figure 2b represents those of comments labelled 'yes'.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "(a) A distinction between a hostile comment and one which intends to insult, antagonize, provoke or troll other users. (b) Subtle condescension. (c) Implicit yet clear dismissiveness.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Examples of subtleties correctly picked up by annotators, with confidence scores shown in brackets alongside the resultant label.", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "Inter-attribute correlations, including with 'toxicity' as scored by Perspective API.", "type_str": "figure", "num": null, "uris": null }, "FIGREF5": { "text": "Krippendorff's \u03b1 for various threshold levels of annotator trustworthiness.", "type_str": "figure", "num": null, "uris": null }, "FIGREF6": { "text": "Receiver operating characteristic curves and AUC for each attribute class.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "text": "and the confidence distributions are shown in Figure 2.", "num": null, "content": "
Attribute | Proportion
Antagonistic/Insulting/Trolling | 4.7%
Condescending/Patronising | 5.5%
Dismissive | 3.1%
(Unfair) Generalisation | 2%
Hostile | 2.5%
Sarcastic | 4.3%
Unhealthy | 7.5%
", "html": null, "type_str": "table" }, "TABREF1": { "text": "", "num": null, "content": "", "html": null, "type_str": "table" }, "TABREF3": { "text": "", "num": null, "content": "
", "html": null, "type_str": "table" }, "TABREF5": { "text": "Comparing Human and BERT performance", "num": null, "content": "
", "html": null, "type_str": "table" } } } }