{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:52.452380Z"
},
"title": "Countering hate on social media: Large-scale classification of hate and counter speech",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Garland",
"suffix": "",
"affiliation": {},
"email": "joshua@santafe.edu"
},
{
"first": "Keyan",
"middle": [],
"last": "Ghazi-Zahedi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jean-Gabriel",
"middle": [],
"last": "Young",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Laurent",
"middle": [],
"last": "H\u00e9bert-Dufresne",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mirta",
"middle": [],
"last": "Galesic",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Hateful rhetoric is plaguing online discourse, fostering extreme societal movements and possibly giving rise to real-world violence. A potential solution to this growing global problem is citizen-generated counter speech where citizens actively engage with hate speech to restore civil non-polarized discourse. However, its actual effectiveness in curbing the spread of hatred is unknown and hard to quantify. One major obstacle to researching this question is a lack of large labeled data sets for training automated classifiers to identify counter speech. Here we use a unique situation in Germany where self-labeling groups engaged in organized online hate and counter speech. We use an ensemble learning algorithm which pairs a variety of paragraph embeddings with regularized logistic regression functions to classify both hate and counter speech in a corpus of millions of relevant tweets from these two groups. Our pipeline achieves macro F1 scores on out of sample balanced test sets ranging from 0.76 to 0.97-accuracy in line and even exceeding the state of the art. We then use the classifier to discover hate and counter speech in more than 135,000 fully-resolved Twitter conversations occurring from 2013 to 2018 and study their frequency and interaction. Altogether, our results highlight the potential of automated methods to evaluate the impact of coordinated counter speech in stabilizing conversations on social media.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\u21e4 Denotes equal contribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Hate speech is a growing problem in many countries [Bakalis, 2015 , Hawdon et al., 2017 , it can have serious psychological consequences [Oksanen et al., 2018] , and is related to, and perhaps even contributing to, real-world violence [M\u00fcller and Schwarz, 2019] . While censorship can help curb hate speech [\u00c1lvarez-Benjumea and Winter, 2018] , it can also impinge on civil liberties and might merely disperse rather than reduce hate [Chandrasekharan et al., 2017] . A promising alternative approach to reduce toxic discourse without recourse to censorship is so-called counter speech, which broadly refers to citizens' response to hateful speech in order to stop it, reduce its consequences, and discourage it [Benesch et al., 2016 , Rieger et al., 2018 .",
"cite_spans": [
{
"start": 51,
"end": 65,
"text": "[Bakalis, 2015",
"ref_id": "BIBREF0"
},
{
"start": 66,
"end": 87,
"text": ", Hawdon et al., 2017",
"ref_id": "BIBREF1"
},
{
"start": 137,
"end": 159,
"text": "[Oksanen et al., 2018]",
"ref_id": "BIBREF2"
},
{
"start": 235,
"end": 261,
"text": "[M\u00fcller and Schwarz, 2019]",
"ref_id": "BIBREF3"
},
{
"start": 307,
"end": 342,
"text": "[\u00c1lvarez-Benjumea and Winter, 2018]",
"ref_id": "BIBREF4"
},
{
"start": 434,
"end": 464,
"text": "[Chandrasekharan et al., 2017]",
"ref_id": "BIBREF5"
},
{
"start": 711,
"end": 732,
"text": "[Benesch et al., 2016",
"ref_id": "BIBREF6"
},
{
"start": 733,
"end": 754,
"text": ", Rieger et al., 2018",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is unknown, however, whether counter speech is actually effective due to the lack of systematic large-scale studies on its impact [Gaffney et al., 2019 , Gagliardone et al., 2015 . A major reason has been the difficulty of designing automated algorithms for discovering counter speech in large online corpora, stemming mostly from the lack of labeled training sets including both hate and counter speech. Past studies that provided insightful analyses of the effectiveness of counter speech mostly used hand-coded examples and were thus limited to small samples of discourse [Mathew et al., 2018 , Wright et al., 2017 , Ziegele et al., 2018 , Ziems et al., 2020 .",
"cite_spans": [
{
"start": 133,
"end": 154,
"text": "[Gaffney et al., 2019",
"ref_id": "BIBREF8"
},
{
"start": 155,
"end": 181,
"text": ", Gagliardone et al., 2015",
"ref_id": "BIBREF9"
},
{
"start": 578,
"end": 598,
"text": "[Mathew et al., 2018",
"ref_id": "BIBREF10"
},
{
"start": 599,
"end": 620,
"text": ", Wright et al., 2017",
"ref_id": "BIBREF12"
},
{
"start": 621,
"end": 643,
"text": ", Ziegele et al., 2018",
"ref_id": "BIBREF13"
},
{
"start": 644,
"end": 664,
"text": ", Ziems et al., 2020",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform the first large-scale classification study of hate and counter speech, using a unique situation in Germany, where self-labeling hate and counter speech groups engaged in discussions around current societal topics such as immigration and elections. One is \"Reconquista Germanica\" (RG), a highly-organized hate group which aimed to disrupt political discussions and promote the right-wing populist, nationalist party Alternative f\u00fcr Deutschland (AfD). At their peak time, RG had between 1,500 and 3,000 active members. The counter group \"Reconquista Internet\" (RI) formed in late April 2018 with the aim of countering RG's hateful messaging through counter speech and to re-balance the public discourse. Within the first week, approximately 45,000 users joined the discord server where RI was being organized. At their peak, RI had an estimated 62,000 registered and verified members, of which over 4,000 were active on their discord server for the first few months. However, RI quickly lost a significant amount of active members, splintering into independent though cooperating smaller groups. We collected millions of tweets from members of these two groups and built labeled training set orders of magnitude larger than past studies of counter speech. By building an ensemble learning system with this large corpus we trained highly accurate classifiers which matched human judgment. We also used this system to study more than 135,000 conversations on German Twitter to understand the interactions between counter and hate groups online-an important first step in studying the impacts of counter speech on a large scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Background and Past Research",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many definitions of online hate speech and its meaning is developing over time. According to more narrow definitions, it refers to insults, discrimination, or intimidation of individuals or groups on the Internet, on the grounds of their supposed race, ethnic origin, gender, religion, or political beliefs [Blaya, 2019 , Weber, 2009 . However, the term can also be extended to speech that aims to spread fearful, negative, and harmful stereotypes, call for exclusion or segregation, incite hatred, and encourage violence against a particular group [Gagliardone et al., 2015 [Gagliardone et al., , you, 2019 [Gagliardone et al., , twi, 2019 [Gagliardone et al., , fac, 2019 , be it using words, symbols, images, or other media. Counter speech entails a citizen-generated re-sponse to online hate in order to stop and prevent the spread of hate speech, and if possible discourage it by changing perpetrators' attitudes and prevailing social norms. Counter speech intervention programs focus on empowering Internet users to speak up against online hate [Gagliardone et al., 2015] . For instance, programs such as seriously [ser, 2019] and the Social Media Helpline [smh, 2019] help users to recognize different kinds of online hate and prepare appropriate responses. Counter speech is seen as a feasible way of countering online hate, with a potential to increase civility and deliberation quality of online discussions [Ziegele et al., 2018 , Habermas, 2015 .",
"cite_spans": [
{
"start": 317,
"end": 329,
"text": "[Blaya, 2019",
"ref_id": "BIBREF15"
},
{
"start": 330,
"end": 343,
"text": ", Weber, 2009",
"ref_id": "BIBREF16"
},
{
"start": 559,
"end": 584,
"text": "[Gagliardone et al., 2015",
"ref_id": "BIBREF9"
},
{
"start": 585,
"end": 617,
"text": "[Gagliardone et al., , you, 2019",
"ref_id": null
},
{
"start": 618,
"end": 650,
"text": "[Gagliardone et al., , twi, 2019",
"ref_id": null
},
{
"start": 651,
"end": 683,
"text": "[Gagliardone et al., , fac, 2019",
"ref_id": null
},
{
"start": 1061,
"end": 1087,
"text": "[Gagliardone et al., 2015]",
"ref_id": "BIBREF9"
},
{
"start": 1131,
"end": 1142,
"text": "[ser, 2019]",
"ref_id": null
},
{
"start": 1173,
"end": 1184,
"text": "[smh, 2019]",
"ref_id": null
},
{
"start": 1428,
"end": 1449,
"text": "[Ziegele et al., 2018",
"ref_id": "BIBREF13"
},
{
"start": 1450,
"end": 1466,
"text": ", Habermas, 2015",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hate and counter speech",
"sec_num": "2.1"
},
{
"text": "There has been a lot of work on developing classifiers to detect hate speech online (e.g., [Brassard-Gourdeau and Khoury, 2018 , Basile et al., 2019 , Burnap et al., 2015 , Burnap and Williams, 2016 , Ribeiro et al., 2018 , Zhang and Luo, 2019 , Bosco et al., 2018 , de Gibert et al., 2018 , Kshirsagar et al., 2018 , MacAvaney et al., 2019 , Malmasi and Zampieri, 2018 , Pitsilis et al., 2018 , Al-Hassan and Al-Dossari, 2019 , Vidgen and Yasseri, 2020 , Zimmerman et al., 2018 ). Many different learning algorithms have been used to perform this classification, ranging from support vector machines and random forests to convolutional and recurrent neural networks [Zhang and Luo, 2019 , Burnap and Williams, 2016 , Bosco et al., 2018 , de Gibert et al., 2018 , Kshirsagar et al., 2018 , Malmasi and Zampieri, 2018 , Pitsilis et al., 2018 , Al-Hassan and Al-Dossari, 2019 , Vidgen and Yasseri, 2020 , Zimmerman et al., 2018 . These algorithms use a variety of feature extraction methods, for example, frequency scores of different n-grams, word and document embeddings [Le and Mikolov, 2014, Pennington et al., 2014] , sentiment scores [Brassard-Gourdeau and Khoury, 2018, Burnap et al., 2015] , part-of-speech scores such as the frequency of adjectives versus nouns used to describe target groups, 'othering' language (e.g., 'we' vs. 'them' [Burnap and Williams, 2016] ), and meta-information about the text authors (e.g., keywords from user bios, usage patterns, their connections based on replies, retweets, and following patterns [Ribeiro et al., 2018] Compared to the number of studies investigating automatic detection of online hate, there have been far fewer studies that aim to automatically detect counter speech. One reason for this is the difficulty and subjectivity of automated identification of counter speech [Kennedy et al., 2017] . As a result, most past studies use hand-coded examples for this task. For instance, Mathew et al. [2019] analyzed more than 9,000 hand-coded counter speech and neutral comments posted in response to hateful YouTube videos. They found that for discriminating counter speech vs. non-counter speech, the combination of tf-idf vectors as features and logistic regression as the classifier performed best, achieving an F1 score of 0.73. In another study, Mathew et al. [2018] analyzed 1,290 pairs of Twitter messages containing hand-coded hate and counter speech. In this data set, a boosting algorithm based mostly on tf-idf values and lexical properties of tweets performed best, achieving F1 score of 0.77. Wright et al. [2017] provide a qualitative analysis of individual examples of counter speech. Ziegele et al. [2018] employed 52 undergraduate students to hand-code 9,763 Facebook messages. A study concurrent to ours [Ziems et al., 2020] investigated hate and counter speech in the context of racist sentiment surrounding COVID-19. They hand-coded 2,319 tweets, of which they labeled 678 as hateful, 359 as counter speech, and 961 as neutral. They were able to achieve F1 scores on unbalanced sets of 0.49 for counter and 0.68 for hate.",
"cite_spans": [
{
"start": 91,
"end": 126,
"text": "[Brassard-Gourdeau and Khoury, 2018",
"ref_id": "BIBREF20"
},
{
"start": 127,
"end": 148,
"text": ", Basile et al., 2019",
"ref_id": "BIBREF21"
},
{
"start": 149,
"end": 170,
"text": ", Burnap et al., 2015",
"ref_id": "BIBREF22"
},
{
"start": 171,
"end": 198,
"text": ", Burnap and Williams, 2016",
"ref_id": "BIBREF23"
},
{
"start": 199,
"end": 221,
"text": ", Ribeiro et al., 2018",
"ref_id": "BIBREF24"
},
{
"start": 222,
"end": 243,
"text": ", Zhang and Luo, 2019",
"ref_id": "BIBREF25"
},
{
"start": 244,
"end": 264,
"text": ", Bosco et al., 2018",
"ref_id": "BIBREF26"
},
{
"start": 265,
"end": 289,
"text": ", de Gibert et al., 2018",
"ref_id": "BIBREF27"
},
{
"start": 290,
"end": 315,
"text": ", Kshirsagar et al., 2018",
"ref_id": "BIBREF28"
},
{
"start": 316,
"end": 340,
"text": ", MacAvaney et al., 2019",
"ref_id": "BIBREF29"
},
{
"start": 341,
"end": 369,
"text": ", Malmasi and Zampieri, 2018",
"ref_id": "BIBREF30"
},
{
"start": 370,
"end": 393,
"text": ", Pitsilis et al., 2018",
"ref_id": "BIBREF31"
},
{
"start": 394,
"end": 426,
"text": ", Al-Hassan and Al-Dossari, 2019",
"ref_id": "BIBREF32"
},
{
"start": 427,
"end": 453,
"text": ", Vidgen and Yasseri, 2020",
"ref_id": "BIBREF33"
},
{
"start": 454,
"end": 478,
"text": ", Zimmerman et al., 2018",
"ref_id": "BIBREF34"
},
{
"start": 667,
"end": 687,
"text": "[Zhang and Luo, 2019",
"ref_id": "BIBREF25"
},
{
"start": 688,
"end": 715,
"text": ", Burnap and Williams, 2016",
"ref_id": "BIBREF23"
},
{
"start": 716,
"end": 736,
"text": ", Bosco et al., 2018",
"ref_id": "BIBREF26"
},
{
"start": 737,
"end": 761,
"text": ", de Gibert et al., 2018",
"ref_id": "BIBREF27"
},
{
"start": 762,
"end": 787,
"text": ", Kshirsagar et al., 2018",
"ref_id": "BIBREF28"
},
{
"start": 788,
"end": 816,
"text": ", Malmasi and Zampieri, 2018",
"ref_id": "BIBREF30"
},
{
"start": 817,
"end": 840,
"text": ", Pitsilis et al., 2018",
"ref_id": "BIBREF31"
},
{
"start": 841,
"end": 873,
"text": ", Al-Hassan and Al-Dossari, 2019",
"ref_id": "BIBREF32"
},
{
"start": 874,
"end": 900,
"text": ", Vidgen and Yasseri, 2020",
"ref_id": "BIBREF33"
},
{
"start": 901,
"end": 925,
"text": ", Zimmerman et al., 2018",
"ref_id": "BIBREF34"
},
{
"start": 1071,
"end": 1078,
"text": "[Le and",
"ref_id": "BIBREF35"
},
{
"start": 1079,
"end": 1118,
"text": "Mikolov, 2014, Pennington et al., 2014]",
"ref_id": null
},
{
"start": 1138,
"end": 1160,
"text": "[Brassard-Gourdeau and",
"ref_id": "BIBREF20"
},
{
"start": 1161,
"end": 1195,
"text": "Khoury, 2018, Burnap et al., 2015]",
"ref_id": null
},
{
"start": 1344,
"end": 1371,
"text": "[Burnap and Williams, 2016]",
"ref_id": "BIBREF23"
},
{
"start": 1536,
"end": 1558,
"text": "[Ribeiro et al., 2018]",
"ref_id": "BIBREF24"
},
{
"start": 1827,
"end": 1849,
"text": "[Kennedy et al., 2017]",
"ref_id": "BIBREF39"
},
{
"start": 1936,
"end": 1956,
"text": "Mathew et al. [2019]",
"ref_id": "BIBREF11"
},
{
"start": 2302,
"end": 2322,
"text": "Mathew et al. [2018]",
"ref_id": "BIBREF10"
},
{
"start": 2557,
"end": 2577,
"text": "Wright et al. [2017]",
"ref_id": "BIBREF12"
},
{
"start": 2651,
"end": 2672,
"text": "Ziegele et al. [2018]",
"ref_id": "BIBREF13"
},
{
"start": 2773,
"end": 2793,
"text": "[Ziems et al., 2020]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Hate and Counter Speech",
"sec_num": "2.2"
},
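As a concrete reference point for the tf-idf plus logistic regression baselines described above, a minimal sketch of such a counter speech classifier is given below; the toy texts and labels are hypothetical stand-ins for a hand-coded corpus, not data from any of the cited studies.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical hand-labeled corpus; the studies above used thousands of
# hand-coded comments rather than this toy list.
texts = ["we should welcome refugees", "go back where you came from",
         "please be civil and stop the insults", "these people are vermin"]
labels = ["counter", "hate", "counter", "hate"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)

# tf-idf vectors of word unigrams and bigrams feed a logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f1_score(y_test, clf.predict(X_test), pos_label="counter"))
```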
{
"text": "While extremely useful as a first step in analyz-ing counter speech, these studies are intrinsically limited because manual coding of counter speech is costly and hard to scale to the size needed to train sophisticated classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification of Hate and Counter Speech",
"sec_num": "2.2"
},
{
"text": "We built our corpus of hate speech by collecting the timelines of 2,120 publicly known members of RG using the Twitter API. We used the list of hate accounts known as the \"B\u00f6hmermann Liste,\" which was promoted as a list of accounts that spread hateful rhetoric, promote alt-right propaganda, or engage in directly hateful speech. As a secondary check of the list, we further verified these accounts by ensuring that the names and/or bios of these accounts contained known RG badges and no known badges of RI (see Table S1 in the Supplementary Materials for a list of these features). Finally, we had an expert hand-verify a large random sample of these accounts to ensure they were indeed actively taking part in hate speech. This resulted in more than 4.6 million tweets which with high likelihood contained some hateful rhetoric. While we cannot guarantee that every single one of these tweets was hate speech, given the purpose of these accounts, we can be reasonably confident in labeling the tweets sent from these accounts as hate speech.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 521,
"text": "Table S1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Collection Strategy",
"sec_num": "3.1"
},
{
"text": "Building our corpus of counter speech was a bit more challenging. We began our search with a hand-curated list of 103 accounts comprised of the core RI Twitter team, each of which were highly focused to their primary objective of engaging in various forms of counter speech. We collected the timelines of each of these known members of RI using the Twitter API. While we were highly confident in this sample of tweets being counter speech, it did not provide enough examples to build balanced training sets. Therefore, to expand our counter speech corpus we also collected the follower-followee network of these 103 core RI members using the Twitter API. This resulted in a list of 70,537 potential counter accounts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection Strategy",
"sec_num": "3.1"
},
{
"text": "We narrowed down this list of potential counter accounts by only including users that appeared in at least 5 of the follower-followee networks of core RI members. We then further required that each user self-identify as an RI member by using language features typical of RI members in their bios (see Table S1 in the Supplementary Materials for a list of these features) and also eliminated all users from this subset who used any RG features in their bios, to remove troll accounts that used both classes of features in their bios. Finally, we also enrolled an expert to check many of these accounts to ensure they were actively taking part in counter speech. This process resulted in a total of 1,472 profiles which we labeled as counter accounts.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 309,
"text": "Table S1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Collection Strategy",
"sec_num": "3.1"
},
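A sketch of this filtering step, assuming the follower-followee lists and account bios have already been collected; the variable names and badge strings below are illustrative placeholders, not the actual Table S1 features.

```python
from collections import Counter

# Hypothetical inputs: the follower-followee network of each core RI member,
# and the bio text of every candidate account.
core_networks = {"core_user_1": {"acct_a", "acct_b"},
                 "core_user_2": {"acct_a", "acct_c"}}
bios = {"acct_a": "Mitglied bei Reconquista Internet #RI",
        "acct_b": "", "acct_c": ""}
RI_BADGES = {"reconquista internet", "#ri"}   # placeholder for Table S1 features
RG_BADGES = {"reconquista germanica"}         # placeholder for Table S1 features

# Count in how many core networks each candidate account appears.
appearances = Counter(a for net in core_networks.values() for a in net)

def is_counter_account(acct, min_networks=5):
    bio = bios.get(acct, "").lower()
    return (appearances[acct] >= min_networks          # network criterion
            and any(b in bio for b in RI_BADGES)       # self-identifies as RI
            and not any(b in bio for b in RG_BADGES))  # excludes troll accounts

counter_accounts = [a for a in appearances if is_counter_account(a)]
```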
{
"text": "To build our corpus of counter speech we collected the timelines of each of these additional accounts as well as the timelines of the core RI Twitter team. This resulted in a total of 4,323,881 tweets which had a high probability of containing counter speech. For training, we labeled all of these tweets as counter speech. It is likely that not all of these tweets in fact contained counter speech, especially those written by users that were identified through our network search. However, one can think of our data gathering process as trading-off some accuracy for a significant increase in scale. To check that accuracy was not strongly impacted, we verified that our classification of counter speech aligned with human judgment after classification (see Section 3.2) and added a post-hoc criterion to eliminate tweets that are not confidently labeled as counter speech (or hate speech) by the trained classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection Strategy",
"sec_num": "3.1"
},
{
"text": "In addition to these labeled tweets, we collected 204,544 fully-resolved conversations (reply trees) that grew in response to tweets of prominent accounts engaged in political speech on German Twitter from 2013 to 2018. These included accounts of large news organizations (e.g., faznet, tagesschau, tagesthemen, derspiegel and spiegelonline, diezeit, and zdfheute), well-known journalists and bloggers (e.g., annewilltalk, dunjahayali, janboehm, jkasek, maischberger, nicolediekmann), and politicians (e.g. cem_oezdemir, c_lindner, goeringeckardt, heikomaas, olafscholz, renatekuenast), all of which were known to be targets of hate speech. Indeed, the majority of these conversations involve instances of both hate and counter speech. We focused on 137,725 trees which originated from 11 accounts that contributed trees in at least 69 of 72 possible months throughout the examined period: derspiegel, goeringeckardt, jkasek, olafscholz, regsprecher, zdfheute, c_lindner, faznet, janboehm, nicolediekmann, and tagesschau. The tweets in these trees were used to study the dynamics of hate and counter speech over time and were not used to train or evaluate the accuracy of the classifiers. Figure 1 shows a few example trees labeled using the pipeline described in Section 3.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 1189,
"end": 1197,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Collection Strategy",
"sec_num": "3.1"
},
{
"text": "As is common in the literature [Schmidt and Wiegand, 2017] we split our classification pipeline into two stages: extraction of features from text, and classification based on those features. Before tweets were used in this pipeline they went through a minor preprocessing stage. All of the text was made lower case, and hashtags, usernames e.g., @username, punctuation and \"RT:\" were all stripped out of the tweet's text. Finally, depending on the model being trained, we removed stop words using one of two lists (\"heavy\" and \"light\"), or we did not remove any stop words. The \"heavy\" stop word list eliminated 231 German words based on nltk's German stop word list. The \"light\" stop word list was based on the heavy list without all words which have been shown to be relevant identifiers in an \"us vs. them\" discourse [Burnap and Williams, 2016] , e.g., wir, uns, sie (we, us, them). This list eliminated 48 words (see Supplementary Materials) .",
"cite_spans": [
{
"start": 31,
"end": 58,
"text": "[Schmidt and Wiegand, 2017]",
"ref_id": "BIBREF40"
},
{
"start": 820,
"end": 847,
"text": "[Burnap and Williams, 2016]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 921,
"end": 946,
"text": "Supplementary Materials)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
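A minimal sketch of this preprocessing stage; the regular expressions and the composition of the "light" list shown here are illustrative, and the actual 231- and 48-word lists are given in the Supplementary Materials.

```python
import re
from nltk.corpus import stopwords  # requires nltk.download("stopwords")

HEAVY = set(stopwords.words("german"))  # roughly the 231-word "heavy" list
# The "light" list keeps "us vs. them" markers such as wir, uns, sie in the
# text; the subtracted set here is an illustrative stand-in, since the real
# light list eliminates 48 words in total.
LIGHT = HEAVY - {"wir", "uns", "sie", "ihr", "unser", "euch"}

def preprocess(tweet, stop_words=None):
    text = tweet.lower()
    text = re.sub(r"\brt\b:?", " ", text)  # strip the retweet marker "RT:"
    text = re.sub(r"[@#]\w+", " ", text)   # strip usernames and hashtags
    text = re.sub(r"[^\w\s]", " ", text)   # strip punctuation, keep umlauts
    tokens = text.split()
    if stop_words is not None:
        tokens = [t for t in tokens if t not in stop_words]
    return tokens

print(preprocess("RT: @user Wir sind mehr! #counterspeech", stop_words=LIGHT))
```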
{
"text": "To extract features from each processed timeline tweet, we constructed paragraph embeddings, also known as doc2vec models [Le and Mikolov, 2014] , using the standard gensim implementation [\u0158eh\u016f\u0159ek and Sojka, 2010] . We will refer to a generic doc2vec model as M d2v . We performed a parameter sweep following standard practice and the guidelines of [Lau and Baldwin, 2016] . This sweep includes the analysis of several doc2vec parameters e.g, maximum distance between current and predicted words, \"distributed-memory\" vs \"distributed bag of words\" frameworks, and five different document label types, as well as three levels of stop word removal.",
"cite_spans": [
{
"start": 122,
"end": 144,
"text": "[Le and Mikolov, 2014]",
"ref_id": "BIBREF35"
},
{
"start": 188,
"end": 213,
"text": "[\u0158eh\u016f\u0159ek and Sojka, 2010]",
"ref_id": null
},
{
"start": 349,
"end": 372,
"text": "[Lau and Baldwin, 2016]",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
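For illustration, one configuration from such a sweep can be set up in gensim roughly as follows; the parameter values mirror those reported as optimal in Section 4, run here on a toy corpus.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus of preprocessed tweets; the real models were trained on
# 1,000,000 tweets per training set.
docs = [TaggedDocument(words=["wir", "sind", "mehr"], tags=["tweet_001"]),
        TaggedDocument(words=["gegen", "den", "hass"], tags=["tweet_002"])]

model = Doc2Vec(
    vector_size=300,    # dimensionality of the inferred feature vectors
    window=5,           # max distance between current and predicted word
    min_count=1,        # the sweep's best value was 10; lowered for the toy corpus
    dm=0,               # 0 = distributed bag of words (PV-DBOW)
    alpha=0.025,        # initial learning rate
    min_alpha=0.00025,  # minimum learning rate
    epochs=20,
)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

x = model.infer_vector(["kein", "hass"])  # a 300-dimensional feature vector
```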
{
"text": "The five different document label types used to train M d2v were as follows. 1) Each tweet was treated as a single document and labeled with a unique label, viz., the unique tweet-id assigned by Twitter. 2) All tweets by a single author used the same document label, viz., the user-id assigned by Twitter. This effectively made every tweet by a particular user a single document. These are the more traditional choices for document labeling. We also used three other labels which incorporate the classification stage into the feature development: 3) Each tweet was assigned a group label, with all tweets from RG accounts labeled \"hate\" and all Figure 1 : Examples of Twitter conversations (reply trees) with labeled hate (red), counter (blue), and neutral speech (white). The root node is shown as a large square. We used a confidence threshold of = 0.75 and a panel of 25 experts to classify these tweets, as described later in Section 3.2. tweets from RI labeled \"counter\". This treats all RG tweets as one document and all RI tweets as another document, incorporating the the label we care about into the feature development stage. However, it conflates all the tweets into two documents. To avoid this we also trained M d2v using multi-label setups. In particular, we trained models where we 4) labeled each tweet using both the author's identifier as well as the group identifier and separate models which 5) labeled each tweet with a unique identifier as well as the group identifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 645,
"end": 653,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
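The five labeling schemes translate directly into doc2vec document tags. A sketch using gensim's TaggedDocument, with hypothetical identifiers:

```python
from gensim.models.doc2vec import TaggedDocument

# Hypothetical tweet attributes.
tokens, tweet_id, user_id, group = ["kein", "hass"], "t42", "u7", "counter"

# The five document label types, expressed as tag lists:
schemes = {
    1: [tweet_id],         # unique tweet id per document
    2: [user_id],          # all tweets by one author share a label
    3: [group],            # one "hate" and one "counter" document
    4: [user_id, group],   # multi-label: author + group
    5: [tweet_id, group],  # multi-label: unique id + group
}

# Scheme 4 (author-group labeling) turned out best in Section 4.
doc = TaggedDocument(words=tokens, tags=schemes[4])
```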
{
"text": "Every M d2v was trained on five different but partially overlapping training sets (approximately 27% overlap). Each training set included 500,000 randomly selected tweets originating from RG accounts and another 500,000 coming from RI accounts. This produced a balanced training set with 50% hate speech and 50% counter speech. This is important in interpreting our classification results correctly, and avoiding accuracy inflation due to unbalanced sets, an apparent frequent problem with much of the current literature where hate speech is highly under sampled [Zhang and Luo, 2019, MacAvaney et al., 2019] . We refer to these training sets as T in,i , to denote the i th in-sample training set.",
"cite_spans": [
{
"start": 563,
"end": 573,
"text": "[Zhang and",
"ref_id": "BIBREF25"
},
{
"start": 574,
"end": 608,
"text": "Luo, 2019, MacAvaney et al., 2019]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
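A sketch of how such balanced, partially overlapping training sets can be drawn; the pool sizes here are toys, where the study sampled 500,000 tweets per group from each corpus.

```python
import random

def balanced_training_set(rg_tweets, ri_tweets, n, seed):
    """Sample n hate and n counter tweets to form one balanced set T_in,i."""
    rng = random.Random(seed)
    return ([(t, "H") for t in rng.sample(rg_tweets, n)] +
            [(t, "C") for t in rng.sample(ri_tweets, n)])

# Toy pools standing in for the ~4.6M-tweet RG and ~4.3M-tweet RI corpora.
rg_pool = [f"hate tweet {i}" for i in range(1000)]
ri_pool = [f"counter tweet {i}" for i in range(1000)]

# Five independent draws over the same pools give partially overlapping sets.
training_sets = [balanced_training_set(rg_pool, ri_pool, n=500, seed=i)
                 for i in range(5)]
```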
{
"text": "Let {M d2v , T in,i } be a trained doc2vec model and the corresponding training set it was trained on. For each tweet t j 2 T in,i we use M d2v to infer a corresponding feature vector x j 2 R 300 , as x j = M d2v (t j ). With each tweet mapped to a feature vector we constructed a decision boundary between tweets from RG members and tweets from RI members using regularized logistic regression. In other words, we wrote the likelihood that tweet j is labeled as coming from an RG/RI account as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "h \u2713 (x j ) = g(\u2713 T x j ), g(z) = 1 1 + e z . (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "where \u2713 2 R 300 is the vector of feature weights. Given a set of labels L = {H, C} for all tweets, we then learned the vector \u2713 that best separates the data by minimizing the loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "P j log h \u2713 (x j ) un- der an`2 regularization constraint 1 ||\u2713|| 2 , where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "is a fixed regularization parameter. We finally solved for \u2713 using the the LBFGS algorithm as implemented in scikit-learn [Pedregosa et al., 2011] .",
"cite_spans": [
{
"start": 122,
"end": 146,
"text": "[Pedregosa et al., 2011]",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
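In scikit-learn terms this stage amounts to the following sketch; note that scikit-learn exposes the l2 penalty through C, the inverse regularization strength, and the feature matrix below is a random stand-in for inferred doc2vec vectors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Random stand-ins for doc2vec feature vectors x_j = M_d2v(t_j) in R^300,
# labeled H (hate, from RG) or C (counter, from RI).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))
y = rng.choice(["H", "C"], size=1000)

# l2-regularized logistic regression solved with L-BFGS, as in the text;
# C is scikit-learn's inverse regularization strength.
clf = LogisticRegression(penalty="l2", solver="lbfgs", C=1.0, max_iter=1000)
clf.fit(X, y)
theta = clf.coef_  # the learned weight vector theta in R^300
```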
{
"text": "To evaluate the accuracy of the resulting hypothesis function h \u2713 we evaluated its predictive accuracy on an out of sample test set denoted T out,i . Each out of sample test set T out,i consisted of 50,000 tweets from both groups, chosen at random while ensuring that T out,i \\ T in,i = ;. For each, M d2v , h \u2713 , T out,i combination we determined the probability of each class label l 2 L for each t 2 T out,i . In particular, for each tweet t 2 T out,i and each label l 2 {H, C} we computed h \u2713 (M d2v (t)) = p(l|M d2v (t); \u2713), where p(l|M d2v (t); \u2713) denotes the probability that a tweet t has label l when classified with the feature vector calculated with model M d2v . The accuracy of this prediction was then assessed against the known labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
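The out-of-sample evaluation then reduces to scoring per-class probabilities on held-out vectors. A self-contained sketch with random stand-in features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Random stand-ins for doc2vec vectors; T_out,i is disjoint from T_in,i.
X_in, y_in = rng.normal(size=(1000, 300)), rng.choice(["C", "H"], size=1000)
X_out, y_out = rng.normal(size=(200, 300)), rng.choice(["C", "H"], size=200)

clf = LogisticRegression(solver="lbfgs", max_iter=1000).fit(X_in, y_in)

# p(l | M_d2v(t); theta) for each held-out tweet and each label l in {H, C}.
probs = clf.predict_proba(X_out)            # columns ordered as clf.classes_
y_hat = clf.classes_[probs.argmax(axis=1)]
print(f1_score(y_out, y_hat, average="macro"))
```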
{
"text": "In addition to logistic regressions, we also used word bias and n-gram based classifiers like those used in [Jaki and De Smedt, 2019] , as well as xgboost [Chen and Guestrin, 2016] with a variety of parameters. However, in both cases the accuracy was worse (only slightly so for xgboost) than the logistic regression experiments reported in Section 4, so we omit these details for brevity.",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "[Jaki and De Smedt, 2019]",
"ref_id": "BIBREF44"
},
{
"start": 155,
"end": 180,
"text": "[Chen and Guestrin, 2016]",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "Instead of looking for the single optimal (M d2v , h \u2713 ) parameterization we used an ensemble learning approach to classification by constructing a \"panel of experts.\" The panel is comprised of N experts which are defined to be the combination of a feature extraction method M d2v as well as a classification or hypothesis function h \u2713 . An ensemble learning approach combines multiple hypothesis functions to form a more robust hypothesis jointly which can lead to greater generalizability and increased out-of-sample accuracy. In this ensemble classification method, each expert is given a tweet in a balanced out-of-sample test set T out,i and asked to assign to it a probability that it belongs to each class l 2 L. For each tweet t 2 T out,i we computed a hate and counter score, S h and S c respectively, in the following way:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S h (t) = 1 N N X i=1 E i (t; H),",
"eq_num": "(2)"
}
],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "where E i (t; l) is the probability that expert i assigns to label l 2 {H, C} for tweet t. Note that S c (t) = 1 S h (t). Note that T out,i \\T in,j need not be empty when i 6 = j. As such, if a tweet appeared in the training set of an expert, we withheld its vote to avoid leaking training data. For final classification we then defined a \"confidence threshold\" 2 [1/2, 1], and used a confidence voting system with thresholding to assign labels to tweets. If S h (t) > then t is labeled H, and if S c (t) > then t is labeled C. If S c (t) and S h (t) are both less than the given threshold the tweet is marked as neutral speech and the panel effectively abstains from voting. This results in some tweets which the panel of experts is not confident labeling as hate or counter speech-a crucial feature of the classifier. Indeed, the primary goal of our classifier is to identify hate and counter speech in online political discourse. Since not all online political discourse is hate or counter speech, the classifier must be able to flag neutral speech, too. The confidence threshold allows us to identify neutral speech by contrasting it with counter and hate speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
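A minimal sketch of this voting scheme, implementing Eq. (2) and the threshold rule. The expert probabilities arrive as a plain array, assuming experts whose training set contained the tweet have already been excluded upstream.

```python
import numpy as np

def panel_score(expert_probs):
    """Eq. (2): S_h(t) is the mean of E_i(t; H) over the N voting experts."""
    return float(np.mean(expert_probs))

def panel_label(expert_probs, tau=0.75):
    """Confidence vote: label H or C only above threshold tau, else neutral."""
    s_h = panel_score(expert_probs)
    s_c = 1.0 - s_h                # S_c(t) = 1 - S_h(t)
    if s_h > tau:
        return "H"
    if s_c > tau:
        return "C"
    return "neutral"               # the panel abstains

print(panel_label(np.array([0.90, 0.80, 0.85])))  # -> "H"
print(panel_label(np.array([0.55, 0.48, 0.60])))  # -> "neutral"
```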
{
"text": "An alternative solution would be to build a ternary classifier that can distinguish between hate, counter and neutral speech. A big challenge would then be to obtain a corpus of neutral speech relevant to political discourse, yet free of hate or counter speech. One could use tweets from politicians or news outlets to build a neutral corpus but then one would have to be careful to have a balanced rep-resentation across the political spectrum so that the neutral class is not biased toward the speech patterns of a particular party or news outlet. As we were confident in our hate and counter speech labeling and not confident in labeling an unbiased neutral corpus we chose to use our ranked classification method instead. However, this is certainly a potentially fruitful area for future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Pipeline",
"sec_num": "3.2"
},
{
"text": "To test whether the automated classifier corresponded to human judgment, we conducted a crowdsourcing study in which human judges evaluated some of the same tweets evaluated by the classifier. Since our corpus mostly contained German tweets, judges were recruited among members of Mechanical Turk who indicated that they can speak German. To qualify, they had to complete a relatively difficult German test item taken from a Goethe Institut's test for B1 German level, which asked them to interpret comments of three individuals about violence in video games. They also evaluated a test sample of a few dozen tweets from across the score spectrum. We also checked their answers to ensure a basic level of conscientiousness. Of the initial 55 candidate raters, 28 raters both solved the German test correctly and gave ratings to the test sample of tweets which indicated that they paid attention to their content. These 28 raters were asked to evaluate 5000 randomly selected tweets evenly spread across the whole range of scores S h (t). Raters ranked tweets on a scale of 1 to 5, from \"very likely counter speech\" to \"very likely hate speech,\" with 3 corresponding to neutral content. Each tweet was evaluated by at least 2 different raters. Standard inter-rater reliability measures are not possible because most tweets were evaluated by a different pair of raters. However, median difference in ratings of each tweet was 0, and absolute mean was 0.57. In other words, different raters evaluated the tweets well within one point on our 5-point scale, suggesting reasonable correspondence of evaluations by different raters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crowdsourcing",
"sec_num": "3.3"
},
{
"text": "Classification Results All combinations of feature extraction models M d2v and classification functions h \u2713 produced a total of N = 289 possible experts. We found that the top 10 highest performing parameter sets across all five balanced training sets were the same for all M d2v . In particular, a max-imum distance between the current and predicted word within a sentence of 5, ignoring all words that occurred less than 10 times, and an initial learning rate of 0.025 with a minimum learning rate of 0.00025, resulted in the highest accuracy. Each of these top performing experts were trained for 20 epochs, with a distributed bag-of-words framework. The optimal preprocessing parameters were also the same across the top 10. Each of these used light stop word removal. The optimal document labeling was also the same across these 10 experts, namely the author-group labeling. Recall that this labeling scheme tagged each tweet with the authors' unique identifier as well as the known group identification (RG vs RI). While the top models had the same training parameters aside from , the models were trained across 5 different training sets, which led to experts that could differ significantly. These top 10 experts had individual F1 scores of 0.755 \u00b1 0.0012 on their individual test sets, when forced to make a classification (confidence = 1/2). Taking into account that each T out,i was balanced, containing 50,000 hate tweets and 50,000 counter tweets, these F1 scores do not suffer from accuracy inflation that would occur with an unbalanced test set [Zhang and Luo, 2019, MacAvaney et al., 2019] . This result compares well to previous studies that used smaller unbalanced data sets and achieved F1 scores ranging from 0.49 to 0.77 [Mathew et al., 2019 , Ziems et al., 2020 .",
"cite_spans": [
{
"start": 1560,
"end": 1570,
"text": "[Zhang and",
"ref_id": "BIBREF25"
},
{
"start": 1571,
"end": 1605,
"text": "Luo, 2019, MacAvaney et al., 2019]",
"ref_id": null
},
{
"start": 1742,
"end": 1762,
"text": "[Mathew et al., 2019",
"ref_id": "BIBREF11"
},
{
"start": 1763,
"end": 1783,
"text": ", Ziems et al., 2020",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "As mentioned in Section 3, we did not only use experts in isolation, but also in an ensemble learning approach where the experts could vote on the class label for each tweet in a given test set. Due to variations in the training sets and parameters, each expert had a slightly different view of the language, suggesting that combining their knowledge might be beneficial. Using the top 10 experts as a panel, instead of individually as just discussed, we obtained an improved average F1 score across all 5 out-of-sample test sets of 0.7616 \u00b1 0.00083. Increasing the size of the panel to include the top 25 experts resulted in an average F1 score across the 5 test sets of 0.7618 \u00b1 0.0007, see Table 1 . We used this large panel for all of our subsequent results.",
"cite_spans": [],
"ref_spans": [
{
"start": 693,
"end": 700,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We also obtained improved results when we varied confidence threshold and allowed the experts to withhold their vote on contentious tweets. Increasing the confidence threshold naturally decreased the number of tweets classified as hate or counter speech. As expected, we found that this led to an increased overall precision, recall, and F1 score, since the labeled tweets were those for which the panel was more certain (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 426,
"end": 433,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "The scores for > 1/2 should be viewed with cautious optimism. The thresholding procedure causes many examples-correctly and incorrectly labeled-to be ignored from these calculations, which obviously may bias these scores in unpredictable ways. Even so, these scores provide a rough approximation of how accurate we can expect the classifier to be when applied to the reply tree dataset at a given confidence threshold. As discussed earlier, while this makes these scores challenging to interpret, the threshold is necessary to avoid mislabeling neutral speech in these conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Comparison to Human Judgment Our crowdsourcing results, shown in Figure 2 , suggest that our automated classifier aligns well with human judgment. Overall correlation between classifier scores and human judgments was r = 0.94. The correlation was somewhat lower for tweets classified as counter speech (r = 0.75) than for those classified as hate (r = 0.96). This could indicate that to humans counter speech looks more like 'neutral' discourse than hate speech does, or this could be a reflection of the slightly weaker counter speech labeling scheme described in Section 3.1 and suggest that counter speech is more challenging for the classifier to identify, or a combination of both. As expected, classifier scores around 0.5 received intermediate hate scores from human judges as well. The labels assigned by human judges were not used during the classification training or annotation process and were only used as a sanity check on the alignment of the classifier with human judgment.",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "We next used our classifier to label 137,725 fully-resolved conversations (reply trees) related to current societal and political issues on German Twitter between 2013 and 2018. Due to limited space, here we focus our analysis on two primary questions. For the interested reader, please see [Garland et al., 2020] for a much deeper analysis of this rich dataset.",
"cite_spans": [
{
"start": 291,
"end": 313,
"text": "[Garland et al., 2020]",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Coloring and Analysis",
"sec_num": null
},
{
"text": "First, how do hate and counter speech develop over time? To study this, we calculated the proportion of hate and counter speech of all speech occurring in each month (using = 0.75), as well as the average hate and counter score for all tweets Figure 3 shows, the proportion of hate speech was rather stable throughout the examined period, slightly increasing towards the end (red line in the left panel). However, its average score was consistently increasing over time (red line in the right panel). The proportion of counter speech was increasing somewhat throughout this period (blue line in the left panel), but its score increased quite strongly towards more extreme speech (blue line in the right panel). A notable change occurred in May 2018, when RI became active: the proportion of counter and other speech increased, and the proportion as well as extremity of hate speech decreased in the following months. This result suggests that organized counter speech might have helped in balancing polarized and hateful discourse, although causality is difficult to establish given the complex web of online and 2 0 1 3 -0 1 2 0 1 3 -0 7 2 0 1 4 -0 1 2 0 1 4 -0 7 2 0 1 5 -0 1 2 0 1 5 -0 7 2 0 1 6 -0 1 2 0 1 6 -0 7 2 0 1 7 -0 1 2 0 1 7 -0 7 2 0 1 8 -0 1 2 0 1 8 -0 7 2 0 1 3 -0 1 2 0 1 3 -0 7 2 0 1 4 -0 1 2 0 1 4 -0 7 2 0 1 5 -0 1 2 0 1 5 -0 7 2 0 1 6 -0 1 2 0 1 6 -0 7 2 0 1 7 -0 1 2 0 1 7 -0 7 2 0 1 8 -0 1 2 0 1 8 -0 7 offline events and process in the broader society throughout that time. Second, we conducted an initial analysis of how hate and counter speech interact in reply trees. We asked, how do tweets identified as hate or counter speech change the expected frequency of future hate and counter speech in a reply tree? For this analysis, we used reply trees that have at least 10 tweets identified as hate and at least 10 identified as counter speech, using a 70% threshold on scores assigned by a panel of the top 25 experts. We measured the overall frequency of assigned labels in every individual tree, and tracked how this frequency increases or decreases in time as more tweets identified as hate or counter are posted. We compared 6-month periods before and after the establishment of RI. Results are shown in Figure 4 . Before RI was founded (Figure 4a ), a low amount of hate tweets Figure 4: Frequency of hate, counter, and other tweets following a hate (counter) tweet, normalized by the overall frequency of these types of tweets in a tree. Panel (a) shows the 6-months period before the establishment of RI, and panel (b) shows the 6-months period after RI was formed. By comparing the right panels of both (a) and (b), tweets from organized counter speech tend to attract more counter and other speech and attract less hate speech than tweets from nonorganized counter speech.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 2233,
"end": 2241,
"text": "Figure 4",
"ref_id": null
},
{
"start": 2266,
"end": 2276,
"text": "(Figure 4a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tree Coloring and Analysis",
"sec_num": null
},
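The monthly aggregation described above can be computed along these lines, assuming a frame of panel-labeled tweets with hypothetical created_at, label, and s_h columns.

```python
import pandas as pd

# Hypothetical frame of labeled tree tweets: one row per tweet with its
# timestamp, panel label at tau = 0.75, and hate score S_h.
df = pd.DataFrame({
    "created_at": pd.to_datetime(["2018-04-03", "2018-05-12", "2018-05-20"]),
    "label": ["H", "C", "neutral"],
    "s_h": [0.91, 0.12, 0.55],
})

monthly = df.groupby(df["created_at"].dt.to_period("M")).agg(
    prop_hate=("label", lambda s: (s == "H").mean()),
    prop_counter=("label", lambda s: (s == "C").mean()),
    mean_hate_score=("s_h", "mean"),
)
print(monthly)
```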
{
"text": "(first panel) somewhat attracted additional hate and suppressed counter speech. However, once many hate tweets were posted, counter speech increased and hate decreased. Similarly, counter tweets (second panel) did not have much effect on hate at first but once there were many counter tweets in a tree they attracted much more hate speech. Importantly, counter speech attracted less hate and stimulated additional counter speech more effectively after RI was formed in April 2018 (Figure 4b) . In all time periods, we also found that counter speech tweets were more likely than hate speech to stimulate neutral or unclassified speech; suggesting that counter speech contributed to depolarizing individual discussions. Taken together, these results suggest that organized counter speech was associated with a more balanced discourse, reflected in an increased proportion of counter speech in discussions and reduced extremity of hate ( Figure 3 ) and counter speech having a strong influence in attracting more counter and neutral speech while not attracting more hate (Figure 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 491,
"text": "(Figure 4b)",
"ref_id": null
},
{
"start": 935,
"end": 943,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1068,
"end": 1078,
"text": "(Figure 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tree Coloring and Analysis",
"sec_num": null
},
{
"text": "Online hate speech is a problem shared by every social media platform, and yet there are still no clear solutions to this growing problem. A potential solution aimed at returning online discourse to civility is citizen-generated counter speech. Until now, studying counter speech and its effectiveness has been limited to small-scale hand-labeled studies. In this paper, we leveraged a unique situation in Germany to perform the first large-scale automated classification of counter speech. Our methods provided F1 scores on a balanced set of 100,000 out-of-sample tweets ranging from 0.76 to 0.97 depending on the confidence threshold being used. Beyond accuracy measures, we used crowdsourcing to verify that the conclusions reached by our classifier were in-line with human judgment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We were able to use this classification algorithm to identify hate and counter speech in over 135,000 fully resolved Twitter conversations from 2013-2018. Our results suggest that counter speech might have contributed to depolarization of discussions and that organized counter speech by RI might have stimulated further counter speech and attracted less hateful responses. While causality cannot be established due to the many other ongoing societal processes at the time, our results suggest that organized counter speech may be a powerful solution to combating the spread of hate online. We hope that the framework developed in this paper will be a starting point to understand the dynamics between hate and counter speech and help develop actionable strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "The authors would like to thank Will Tracy and Santa Fe Institute's Applied Complexity team for support and resources throughout this project. J.G. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Cyberhate: An issue of continued concern for the council of Europe's anti-racism commission",
"authors": [
{
"first": "Chara",
"middle": [],
"last": "Bakalis",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chara Bakalis. Cyberhate: An issue of continued concern for the council of Europe's anti-racism commission. Council of Europe, 2015.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exposure to online hate in four nations: A cross-national consideration",
"authors": [
{
"first": "James",
"middle": [],
"last": "Hawdon",
"suffix": ""
},
{
"first": "Atte",
"middle": [],
"last": "Oksanen",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "R\u00e4s\u00e4nen",
"suffix": ""
}
],
"year": 2017,
"venue": "Deviant Behav",
"volume": "38",
"issue": "3",
"pages": "254--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Hawdon, Atte Oksanen, and Pekka R\u00e4s\u00e4nen. Exposure to online hate in four nations: A cross-national considera- tion. Deviant Behav., 38(3):254-266, 2017.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Perceived societal fear and cyberhate after the November 2015 paris terrorist attacks. Terror. Political Violence",
"authors": [
{
"first": "Atte",
"middle": [],
"last": "Oksanen",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kaakinen",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Minkkinen",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "R\u00e4s\u00e4nen",
"suffix": ""
},
{
"first": "Bernard",
"middle": [],
"last": "Enjolras",
"suffix": ""
},
{
"first": "Kari",
"middle": [],
"last": "Steen-Johnsen",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Atte Oksanen, Markus Kaakinen, Jaana Minkkinen, Pekka R\u00e4s\u00e4nen, Bernard Enjolras, and Kari Steen-Johnsen. Per- ceived societal fear and cyberhate after the November 2015 paris terrorist attacks. Terror. Political Violence, pages 1-20, 2018.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Fanning the flames of hate: Social media and hate crime",
"authors": [
{
"first": "Karsten",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karsten M\u00fcller and Carlo Schwarz. Fanning the flames of hate: Social media and hate crime. SSRN:3082972, 2019.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Normative change and culture of hate: An experiment in online environments",
"authors": [
{
"first": "Amalia",
"middle": [],
"last": "\u00c1lvarez-Benjumea",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Winter",
"suffix": ""
}
],
"year": 2018,
"venue": "Eur. Sociol. Rev",
"volume": "34",
"issue": "3",
"pages": "223--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amalia \u00c1lvarez-Benjumea and Fabian Winter. Normative change and culture of hate: An experiment in online envi- ronments. Eur. Sociol. Rev., 34(3):223-237, 2018.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "You can't stay here: The efficacy of Reddit's 2015 ban examined through hate speech",
"authors": [
{
"first": "Eshwar",
"middle": [],
"last": "Chandrasekharan",
"suffix": ""
},
{
"first": "Umashanthi",
"middle": [],
"last": "Pavalanathan",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Glynn",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the ACM on Human-Computer Interaction",
"volume": "1",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eshwar Chandrasekharan, Umashanthi Pavalanathan, Anirudh Srinivasan, Adam Glynn, Jacob Eisenstein, and Eric Gilbert. You can't stay here: The efficacy of Reddit's 2015 ban examined through hate speech. In Proceedings of the ACM on Human-Computer Interaction, volume 1, pages 1-22, 2017.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Considerations for successful counterspeech",
"authors": [
{
"first": "S",
"middle": [],
"last": "Benesch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ruths",
"suffix": ""
},
{
"first": "KP",
"middle": [],
"last": "Dillon",
"suffix": ""
},
{
"first": "H M",
"middle": [],
"last": "Saleem",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S Benesch, D Ruths, KP Dillon, H M Saleem, and L Wright. Considerations for successful counterspeech, 2016. URL https://https://dangerousspeech.org/ considerations-for-successful-counterspeech/.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hate and counter-voices in the internet: Introduction to the special issue",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Rieger",
"suffix": ""
},
{
"first": "Josephine",
"middle": [
"B"
],
"last": "Schmitt",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Frischlich",
"suffix": ""
}
],
"year": 2018,
"venue": "SCM Stud. Commun. Media",
"volume": "7",
"issue": "4",
"pages": "459--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Rieger, Josephine B Schmitt, and Lena Frischlich. Hate and counter-voices in the internet: Introduction to the spe- cial issue. SCM Stud. Commun. Media, 7(4):459-472, 2018.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Are cyberbullying intervention and prevention programs effective? A systematic and metaanalytical review",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Gaffney",
"suffix": ""
},
{
"first": "David",
"middle": [
"P"
],
"last": "Farrington",
"suffix": ""
},
{
"first": "Dorothy",
"middle": [
"L"
],
"last": "Espelage",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"M"
],
"last": "Ttofi",
"suffix": ""
}
],
"year": 2019,
"venue": "Aggress. Violent Behav",
"volume": "45",
"issue": "",
"pages": "134--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Gaffney, David P Farrington, Dorothy L Espelage, and Maria M Ttofi. Are cyberbullying intervention and prevention programs effective? A systematic and meta- analytical review. Aggress. Violent Behav., 45:134-153, 2019.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Countering online hate speech",
"authors": [
{
"first": "Iginio",
"middle": [],
"last": "Gagliardone",
"suffix": ""
},
{
"first": "Danit",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Thiago",
"middle": [],
"last": "Alves",
"suffix": ""
},
{
"first": "Gabriela",
"middle": [],
"last": "Martinez",
"suffix": ""
}
],
"year": 2015,
"venue": "Unesco Publishing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iginio Gagliardone, Danit Gal, Thiago Alves, and Gabriela Martinez. Countering online hate speech. Unesco Publish- ing, 2015.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Analyzing the hate and counter speech accounts on Twitter",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Navish",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.02712"
]
},
"num": null,
"urls": [],
"raw_text": "Binny Mathew, Navish Kumar, Pawan Goyal, and Animesh Mukherjee. Analyzing the hate and counter speech ac- counts on Twitter. arXiv:1812.02712, 2018.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Thou shalt not hate: Countering online hate speech",
"authors": [
{
"first": "Binny",
"middle": [],
"last": "Mathew",
"suffix": ""
},
{
"first": "Punyajoy",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Hardik",
"middle": [],
"last": "Tharad",
"suffix": ""
},
{
"first": "Subham",
"middle": [],
"last": "Rajgaria",
"suffix": ""
},
{
"first": "Prajwal",
"middle": [],
"last": "Singhania",
"suffix": ""
},
{
"first": "Suman Kalyan",
"middle": [],
"last": "Maity",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Animesh",
"middle": [],
"last": "Mukherjee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "13",
"issue": "",
"pages": "369--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Binny Mathew, Punyajoy Saha, Hardik Tharad, Subham Ra- jgaria, Prajwal Singhania, Suman Kalyan Maity, Pawan Goyal, and Animesh Mukherjee. Thou shalt not hate: Countering online hate speech. In Proceedings of the In- ternational AAAI Conference on Web and Social Media, volume 13, pages 369-380, 2019.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Vectors for counterspeech on Twitter",
"authors": [
{
"first": "Lucas",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
},
{
"first": "Kelly",
"middle": [
"P"
],
"last": "Dillon",
"suffix": ""
},
{
"first": "Haji",
"middle": [
"Mohammad"
],
"last": "Saleem",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Benesch",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the first workshop on abusive language online",
"volume": "",
"issue": "",
"pages": "57--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucas Wright, Derek Ruths, Kelly P Dillon, Haji Mohammad Saleem, and Susan Benesch. Vectors for counterspeech on Twitter. In Proceedings of the first workshop on abusive language online, pages 57-62, 2017.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Journalistic counter-voices in comment sections: Patterns, determinants, and potential consequences of interactive moderation of uncivil user comments",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Ziegele",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Jost",
"suffix": ""
},
{
"first": "Marike",
"middle": [],
"last": "Bormann",
"suffix": ""
},
{
"first": "Dominique",
"middle": [],
"last": "Heinbach",
"suffix": ""
}
],
"year": 2018,
"venue": "SCM Stud. Commun. Media",
"volume": "7",
"issue": "4",
"pages": "525--554",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Ziegele, Pablo Jost, Marike Bormann, and Dominique Heinbach. Journalistic counter-voices in comment sections: Patterns, determinants, and potential consequences of in- teractive moderation of uncivil user comments. SCM Stud. Commun. Media, 7(4):525-554, 2018.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Racism is a virus: Anti-asian hate and counterhate in social media during the COVID-19 crisis",
"authors": [
{
"first": "Caleb",
"middle": [],
"last": "Ziems",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Srijan",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.12423"
]
},
"num": null,
"urls": [],
"raw_text": "Caleb Ziems, Bing He, Sandeep Soni, and Srijan Kumar. Racism is a virus: Anti-asian hate and counterhate in so- cial media during the COVID-19 crisis. arXiv:2005.12423, 2020.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Cyberhate: A review and content analysis of intervention strategies",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Blaya",
"suffix": ""
}
],
"year": 2019,
"venue": "Aggress. Violent Behav",
"volume": "45",
"issue": "",
"pages": "163--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Catherine Blaya. Cyberhate: A review and content analysis of intervention strategies. Aggress. Violent Behav., 45: 163-172, 2019.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Youtube: Hate speech policy",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Weber. Manual on hate speech. Council Of Europe, 2009. Youtube: Hate speech policy, 2019. URL https:",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Twitter: Hateful conduct policy",
"authors": [],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "//support.google.com/youtube/answer/ 2801939. Twitter: Hateful conduct policy, 2019. URL https://help.twitter. com/en/rules-and-policies/ hateful-conduct-policy. Facebook: Hate speech, 2019. URL https:",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Social Media Helpline",
"authors": [
{
"first": "",
"middle": [],
"last": "Seriously",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seriously, 2019. URL http://www.seriously.ong. Social Media Helpline, 2019. URL https://socialmediahelpline.com/ counterspeech-dos-and-donts-for-students/.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Between facts and norms: Contributions to a discourse theory of law and democracy",
"authors": [
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Habermas",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00fcrgen Habermas. Between facts and norms: Contributions to a discourse theory of law and democracy. John Wiley & Sons, 2015.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Impact of sentiment detection to recognize toxic and subversive online comments",
"authors": [
{
"first": "\u00c9loi",
"middle": [],
"last": "Brassard-Gourdeau",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Khoury",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.01704"
]
},
"num": null,
"urls": [],
"raw_text": "\u00c9loi Brassard-Gourdeau and Richard Khoury. Impact of sen- timent detection to recognize toxic and subversive online comments. arXiv:1812.01704, 2018.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Nozza",
"middle": [],
"last": "Debora",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Nozza Deb- ora, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, Manuela Sanguinetti, et al. Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In 13th International Workshop on Semantic Evaluation, pages 54-63. Association for Com- putational Linguistics, 2019.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Detecting tension in online communities with computational Twitter analysis",
"authors": [
{
"first": "Pete",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "Omer",
"middle": [
"F"
],
"last": "Rana",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Avis",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Housley",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Edwards",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Sloan",
"suffix": ""
}
],
"year": 2015,
"venue": "Technol. Forecast. Soc. Change",
"volume": "95",
"issue": "",
"pages": "96--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pete Burnap, Omer F Rana, Nick Avis, Matthew Williams, William Housley, Adam Edwards, Jeffrey Morgan, and Luke Sloan. Detecting tension in online communities with computational Twitter analysis. Technol. Forecast. Soc. Change, 95:96-108, 2015.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Us and them: identifying cyber hate on Twitter across multiple protected characteristics",
"authors": [
{
"first": "Pete",
"middle": [],
"last": "Burnap",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"L"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2016,
"venue": "EPJ Data science",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pete Burnap and Matthew L Williams. Us and them: iden- tifying cyber hate on Twitter across multiple protected characteristics. EPJ Data science, 5(1):11, 2016.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Characterizing and detecting hateful users on Twitter",
"authors": [
{
"first": "Manoel",
"middle": [],
"last": "Horta Ribeiro",
"suffix": ""
},
{
"first": "Pedro",
"middle": [
"H"
],
"last": "Calais",
"suffix": ""
},
{
"first": "Yuri",
"middle": [
"A"
],
"last": "Santos",
"suffix": ""
},
{
"first": "Virg\u00edlio",
"middle": [
"A",
"F"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "Wagner",
"middle": [],
"last": "Meira",
"suffix": "Jr"
}
],
"year": 2018,
"venue": "Twelfth international AAAI conference on web and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manoel Horta Ribeiro, Pedro H Calais, Yuri A Santos, Virg\u00edlio AF Almeida, and Wagner Meira Jr. Character- izing and detecting hateful users on Twitter. In Twelfth international AAAI conference on web and social media, 2018.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hate speech detection: A solved problem? the challenging case of long tail on",
"authors": [
{
"first": "Ziqi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "Twitter. Semantic Web",
"volume": "10",
"issue": "5",
"pages": "925--945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqi Zhang and Lei Luo. Hate speech detection: A solved problem? the challenging case of long tail on Twitter. Semantic Web, 10(5):925-945, 2019.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Overview of the evalita EVALITA hate speech detection task",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'Orletta",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Poletto",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Tesconi",
"suffix": ""
}
],
"year": 2018,
"venue": "EVALITA 2018-Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian",
"volume": "2263",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristina Bosco, Dell'Orletta Felice, Fabio Poletto, Manuela Sanguinetti, and Tesconi Maurizio. Overview of the evalita EVALITA hate speech detection task. In EVALITA 2018- Sixth Evaluation Campaign of Natural Language Process- ing and Speech Tools for Italian, volume 2263. CEUR, 2018.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Hate speech dataset from a white supremacy forum",
"authors": [
{
"first": "Ona",
"middle": [],
"last": "De Gibert",
"suffix": ""
},
{
"first": "Naiara",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Garc\u00eda-Pablos",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Cuadros",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ona de Gibert, Naiara Perez, Aitor Garc\u00eda-Pablos, and Montse Cuadros. Hate speech dataset from a white supremacy forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 11-20. Association for Computational Linguistics, 2018.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Predictive embeddings for hate speech detection on Twitter",
"authors": [
{
"first": "Rohan",
"middle": [],
"last": "Kshirsagar",
"suffix": ""
},
{
"first": "Tyus",
"middle": [],
"last": "Cukuvac",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Mcgregor",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)",
"volume": "",
"issue": "",
"pages": "26--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rohan Kshirsagar, Tyus Cukuvac, Kathleen McKeown, and Susan McGregor. Predictive embeddings for hate speech detection on Twitter. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 26-32. Asso- ciation for Computational Linguistics, 2018.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Hate speech detection: Challenges and solutions",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "MacAvaney",
"suffix": ""
},
{
"first": "Hao-Ren",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Katina",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
},
{
"first": "Ophir",
"middle": [],
"last": "Frieder",
"suffix": ""
}
],
"year": 2019,
"venue": "PLOS ONE",
"volume": "14",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean MacAvaney, Hao-Ren Yao, Eugene Yang, Katina Russell, Nazli Goharian, and Ophir Frieder. Hate speech detection: Challenges and solutions. PLOS ONE, 14(8), 2019.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Challenges in discriminating profanity from hate speech",
"authors": [
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "J. Exp. Theor. Artif. Intell",
"volume": "30",
"issue": "2",
"pages": "187--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shervin Malmasi and Marcos Zampieri. Challenges in dis- criminating profanity from hate speech. J. Exp. Theor. Artif. Intell., 30(2):187-202, 2018.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Effective hate-speech detection in Twitter data using recurrent neural networks",
"authors": [
{
"first": "Georgios",
"middle": [
"K"
],
"last": "Pitsilis",
"suffix": ""
},
{
"first": "Heri",
"middle": [],
"last": "Ramampiaro",
"suffix": ""
},
{
"first": "Helge",
"middle": [],
"last": "Langseth",
"suffix": ""
}
],
"year": 2018,
"venue": "Appl. Intell",
"volume": "48",
"issue": "12",
"pages": "4730--4742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgios K Pitsilis, Heri Ramampiaro, and Helge Langseth. Effective hate-speech detection in Twitter data using re- current neural networks. Appl. Intell., 48(12):4730-4742, 2018.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Detection of hate speech in social networks: a survey on multilingual corpus",
"authors": [
{
"first": "Areej",
"middle": [],
"last": "Al-Hassan",
"suffix": ""
},
{
"first": "Hmood",
"middle": [],
"last": "Al-Dossari",
"suffix": ""
}
],
"year": 2019,
"venue": "6th International Conference on Computer Science and Information Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Areej Al-Hassan and Hmood Al-Dossari. Detection of hate speech in social networks: a survey on multilingual corpus. In 6th International Conference on Computer Science and Information Technology, 2019.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Detecting weak and strong islamophobic hate speech on social media",
"authors": [
{
"first": "Bertie",
"middle": [],
"last": "Vidgen",
"suffix": ""
},
{
"first": "Taha",
"middle": [],
"last": "Yasseri",
"suffix": ""
}
],
"year": 2020,
"venue": "J. Inf. Technol. Politics",
"volume": "17",
"issue": "1",
"pages": "66--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bertie Vidgen and Taha Yasseri. Detecting weak and strong islamophobic hate speech on social media. J. Inf. Technol. Politics, 17(1):66-78, 2020.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Improving hate speech detection with deep learning ensembles",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Zimmerman",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Fox",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Zimmerman, Udo Kruschwitz, and Chris Fox. Improv- ing hate speech detection with deep learning ensembles. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In International conference on machine learning, pages 1188-1196, 2014.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Man- ning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing, pages 1532-1543, 2014.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171--4186, 2019.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Eleventh international AAAI conference on Web and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and In- gmar Weber. Automated hate speech detection and the problem of offensive language. In Eleventh international AAAI conference on Web and social media, 2017.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Chris Loo, and Saurav Sahay. Technology solutions to combat online harassment",
"authors": [
{
"first": "George",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccollough",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Dixon",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Bastidas",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Ryan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Loo",
"suffix": ""
},
{
"first": "Saurav",
"middle": [],
"last": "Sahay",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the first workshop on abusive language online",
"volume": "",
"issue": "",
"pages": "73--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Kennedy, Andrew McCollough, Edward Dixon, Alexei Bastidas, John Ryan, Chris Loo, and Saurav Sa- hay. Technology solutions to combat online harassment. In Proceedings of the first workshop on abusive language online, pages 73-77, 2017.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A survey on hate speech detection using natural language processing",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Wiegand",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Schmidt and Michael Wiegand. A survey on hate speech detection using natural language processing. In Proceed- ings of the Fifth International Workshop on Natural Lan- guage Processing for Social Media, pages 1-10, 2017.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Software framework for topic modelling with large corpora",
"authors": [
{
"first": "Radim",
"middle": [],
"last": "\u0158eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, 2010. ELRA.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "An empirical evaluation of doc2vec with practical insights into document embedding generation",
"authors": [
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau and Timothy Baldwin. An empirical evaluation of doc2vec with practical insights into document embed- ding generation. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 78-86. Associa- tion for Computational Linguistics, 2016.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res., 12: 2825-2830, 2011.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Right-wing German hate speech on Twitter: Analysis and automatic detection",
"authors": [
{
"first": "Sylvia",
"middle": [],
"last": "Jaki",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"De"
],
"last": "Smedt",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.07518"
]
},
"num": null,
"urls": [],
"raw_text": "Sylvia Jaki and Tom De Smedt. Right-wing German hate speech on Twitter: Analysis and automatic detection. arXiv:1910.07518, 2019.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "xgboost: A scalable tree boosting system",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen and Carlos Guestrin. xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 785-794, 2016.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Impact and dynamics of hate and counter speech online",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Garland",
"suffix": ""
},
{
"first": "Keyan",
"middle": [],
"last": "Ghazi-Zahedi",
"suffix": ""
},
{
"first": "Jean-Gabriel",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "H\u00e9bert-Dufresne",
"suffix": ""
},
{
"first": "Mirta",
"middle": [],
"last": "Galesic",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.08392"
]
},
"num": null,
"urls": [],
"raw_text": "Joshua Garland, Keyan Ghazi-Zahedi, Jean-Gabriel Young, Laurent H\u00e9bert-Dufresne, and Mirta Galesic. Impact and dynamics of hate and counter speech online. arXiv preprint arXiv:2009.08392, 2020.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Human judgment of hate and counter speech corresponds to automated classification (panel of 25 experts). Average human judgments of tweets classified as counter speech by our method are shown in blue (left-half), and judgments for tweets classified as hate are shown in red (right-half). Individual human judgments are averaged across bins of width 0.02 of classifier scores for the original tweet. Error bars represent \u00b1 one standard error. exceeding = 1/2. As"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Proportion of hate, counter, and other speech in reply trees from 2013-2018, using a = 0.75 threshold (left panel), and average hate and counter score of tweets exceeding the = 1/2 threshold (right panel). After the establishment of RI in April 2018, the proportion of counter speech increases, and the ongoing increase in polarization is slowed down as indicated by a decrease in average hate and counter scores."
}
}
}
}