{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:23.535064Z"
},
"title": "Label Propagation-Based Semi-Supervised Learning for Hate Speech Classification",
"authors": [
{
"first": "Ashwin",
"middle": [],
"last": "Geet D'sa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "Inria",
"country": "LORIA"
}
},
"email": ""
},
{
"first": "Irina",
"middle": [],
"last": "Illina",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "Inria",
"country": "LORIA"
}
},
"email": ""
},
{
"first": "Dominique",
"middle": [],
"last": "Fohr",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "Inria",
"country": "LORIA"
}
},
"email": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": "",
"affiliation": {
"laboratory": "Spoken Language System Group",
"institution": "Saarland University",
"location": {}
},
"email": ""
},
{
"first": "Dana",
"middle": [],
"last": "Ruiter",
"suffix": "",
"affiliation": {
"laboratory": "Spoken Language System Group",
"institution": "Saarland University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Research on hate speech classification has received increased attention. In real-life scenarios, a small amount of labeled hate speech data is available to train a reliable classifier. Semi-supervised learning takes advantage of a small amount of labeled data and a large amount of unlabeled data. In this paper, label propagation-based semi-supervised learning is explored for the task of hate speech classification. The quality of labeling the unlabeled set depends on the input representations. In this work, we show that pre-trained representations are label agnostic, and when used with label propagation yield poor results. Neural network-based fine-tuning can be adopted to learn task-specific representations using a small amount of labeled data. We show that fully fine-tuned representations may not always be the best representations for the label propagation and intermediate representations may perform better in a semi-supervised setup.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Research on hate speech classification has received increased attention. In real-life scenarios, a small amount of labeled hate speech data is available to train a reliable classifier. Semi-supervised learning takes advantage of a small amount of labeled data and a large amount of unlabeled data. In this paper, label propagation-based semi-supervised learning is explored for the task of hate speech classification. The quality of labeling the unlabeled set depends on the input representations. In this work, we show that pre-trained representations are label agnostic, and when used with label propagation yield poor results. Neural network-based fine-tuning can be adopted to learn task-specific representations using a small amount of labeled data. We show that fully fine-tuned representations may not always be the best representations for the label propagation and intermediate representations may perform better in a semi-supervised setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online hate speech is anti-social communicative behavior and targets minority sections of the society based on religion, ethnicity, gender, etc. (Delgado and Stefancic, 2014) . It leads to threat, fear, and violence to an individual or a group. As monitoring these contents by humans is expensive and timeconsuming, machine learning-based classification techniques can be used. The last few years have seen a tremendous increase in research towards hate speech classification (Badjatiya et al., 2017; Nobata et al., 2016; Del Vigna et al., 2017; Malmasi and Zampieri, 2018) . The performance of these classifiers depends on the amount of available labeled data. However, in many real life scenarios there is a limited amount of labeled data and abundant unlabeled data. In need of data, different data augmentation techniques based on synonym replacement (Rizos et al., 2019 ), text generation (Rizos et al., 2019; Wullach et al., 2020) , back translation (Aroyehun and Gelbukh, 2018), knowledge graphs (Sharifirad et al., 2018) , etc, have been employed for up-sampling the training data in the field of hate speech classification. The performance gain by these techniques is small, and they fail to take advantage of the available unlabeled data.",
"cite_spans": [
{
"start": 145,
"end": 174,
"text": "(Delgado and Stefancic, 2014)",
"ref_id": "BIBREF7"
},
{
"start": 476,
"end": 500,
"text": "(Badjatiya et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 501,
"end": 521,
"text": "Nobata et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 522,
"end": 545,
"text": "Del Vigna et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 546,
"end": 573,
"text": "Malmasi and Zampieri, 2018)",
"ref_id": "BIBREF10"
},
{
"start": 855,
"end": 874,
"text": "(Rizos et al., 2019",
"ref_id": "BIBREF12"
},
{
"start": 894,
"end": 914,
"text": "(Rizos et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 915,
"end": 936,
"text": "Wullach et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 1003,
"end": 1028,
"text": "(Sharifirad et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Semi-supervised learning is a technique to combine a small amount of labeled data with a large amount of unlabeled data during training (Abney, 2007) , intending to improve the performance of the classifiers. Label propagation (Xiaojin and Zoubin, 2002) is a graph-based semi-supervision technique analogous to the k-Nearest-Neighbours algorithm. It assumes that data points close to each other tend to have a similar label. These algorithms rely on the representation of data points to create a distance graph which captures their proximity.",
"cite_spans": [
{
"start": 136,
"end": 149,
"text": "(Abney, 2007)",
"ref_id": "BIBREF0"
},
{
"start": 227,
"end": 253,
"text": "(Xiaojin and Zoubin, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, pre-trained word embeddings such as Word2Vec, fastText, Global Vectors for Word Representation (GloVe) have been used for representing words for hate speech classification (Waseem et al., 2017; Badjatiya et al., 2017) . Furthermore, pre-trained sentence embeddings such as InferSent, Universal Sentence Encoder, Embeddings from Language Models (ELMo) have been used for the task of hate speech classification (Indurthi et al., 2019; Bojkovsk\u1ef3 and Pikuliak, 2019) . These pretrained sentence embeddings are generic representations and are unaware of task-specific classes. Transforming the pre-trained sentence embeddings to task-specific representations can be helpful for label propagation, and our work explores this direction. The contributions of this article are:",
"cite_spans": [
{
"start": 182,
"end": 203,
"text": "(Waseem et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 204,
"end": 227,
"text": "Badjatiya et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 419,
"end": 442,
"text": "(Indurthi et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 443,
"end": 472,
"text": "Bojkovsk\u1ef3 and Pikuliak, 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 evaluation of label propagation based semisupervised learning for hate speech classification; \u2022 comparison of label propagation on pretrained and task-specific representations learned from a small labeled corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: Section 2 describes label-propagation based semisupervised learning for hate speech classification. Section 3 describes the experimental setup. Results and discussions are presented in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we briefly describe sentence embeddings and the label propagation algorithm. This is followed by our methodology for semi-supervised training for hate speech classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised Learning",
"sec_num": "2"
},
{
"text": "Sentence embeddings are fixed-length vector representations that capture the semantics of the sentence. These embeddings are learned from large unlabeled corpora. Similar sentences are close to each other in this vector space and hence they are used as an input representation for various downstream tasks. We use the pre-trained Universal Sentence Encoder (USE) (Cer et al., 2018) to represent the tweets.",
"cite_spans": [
{
"start": 363,
"end": 381,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings",
"sec_num": "2.1"
},
{
"text": "Label propagation is a graph-based semisupervised learning technique that uses the labels from the labeled data to transduce the labels to unlabeled data. Label propagation considers two sets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Propagation",
"sec_num": "2.2"
},
{
"text": "(x 1 , y 1 ) . . . (x l , y l ) \u2208 L as labeled set and (x l+1 , y l+1 ) . . . (x n , y n ) \u2208 U as unlabeled set, where y 1 . . . y l \u2208 {1 . . . C}, {x 1 . . . x n } \u2208 R D .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Propagation",
"sec_num": "2.2"
},
{
"text": "Here, C is the number of classes, and the labeled set L consists of all the classes. The algorithm constructs a graph G = (V, E), where V is the set of vertices representing set L and U , and the edges in set E represents the similarity between two nodes i and j with weight w ij . The weight w ij is computed such that nodes with smaller distances (similar nodes) will have larger weights. The algorithm uses a probabilistic transition matrix T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Propagation",
"sec_num": "2.2"
},
{
"text": "T ij = P (i \u2192 j) = w ij n k=1 w kj",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Propagation",
"sec_num": "2.2"
},
{
"text": "The algorithm iteratively updates the labels Y \u2190 T Y , by clamping the labels of the labeled set, and until Y converges. Figure 1 outlines the semi-supervised learning setup adopted in our study. We use a Multilayer Perceptron (MLP) for two purposes: (a) to learn task-specific representations; (b) to perform multiclass classification. Task-specific representations: First, the pretrained representation S \u2208 R D are transformed to task-specific representation\u015c \u2208 RD with the MLP classifier trained using a small amount of available labeled set L. After training the MLP classifier, we pass the label agnostic pre-trained representation S of a given sample from the labeled set L and unlabeled set U as input to the MLP. We consider the activation from outputs of the hidden layers (h1 and h2) as two different task-specific transformed rep-resentations\u015c. Since the MLP classifier is trained with labeled data, we expect the representations h1 and h2 to capture task-specific label information.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Label Propagation",
"sec_num": "2.2"
},
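The update above is compact enough to implement directly. Below is a minimal NumPy sketch of the propagation loop, shown with a dense RBF-weighted graph for clarity (the experiments in Section 3.4 instead use scikit-learn's kNN-based implementation); `sigma`, the tolerance, and the input conventions are illustrative assumptions, with `X` an (n, d) float array and `y` holding class indices and -1 for unlabeled samples:

```python
import numpy as np

def label_propagation(X, y, n_classes, sigma=1.0, max_iter=100, tol=1e-6):
    """Transduce labels from labeled points (y >= 0) to unlabeled ones (y == -1)."""
    n = len(X)
    # Edge weights w_ij: smaller distances (more similar nodes) -> larger weights.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-sq_dists / sigma**2)
    # Probabilistic transition matrix T_ij = w_ij / sum_k w_kj.
    T = W / W.sum(axis=0, keepdims=True)
    labeled = y >= 0
    Y = np.full((n, n_classes), 1.0 / n_classes)
    Y[labeled] = np.eye(n_classes)[y[labeled]]  # one-hot rows for the labeled set
    for _ in range(max_iter):
        Y_next = T @ Y                                   # propagate: Y <- TY
        Y_next /= Y_next.sum(axis=1, keepdims=True)      # row-normalize
        Y_next[labeled] = np.eye(n_classes)[y[labeled]]  # clamp the labeled set
        converged = np.abs(Y_next - Y).max() < tol
        Y = Y_next
        if converged:
            break
    return Y.argmax(axis=1)
```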
{
"text": "Semi-supervised training: The pre-trained representations S or task-specific representations\u015c will be used to represent data points in label propagation. We perform label propagation using the labeled set L and unlabeled set U , to obtain the labels for the samples in U . Finally, the pre-trained embeddings of set L and set U , along with original labels for L and labels obtained from label propagation for U will be used to train an MLP for hate speech classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methodology",
"sec_num": "2.3"
},
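The three steps of this recipe can be sketched end-to-end with off-the-shelf components. This is a hedged illustration rather than the authors' exact training code: the random arrays stand in for 512-dimensional USE embeddings, and the MLP and label propagation settings anticipate Sections 3.4 and 3.5:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.semi_supervised import LabelPropagation

# Stand-ins for USE embeddings of the labeled set L and unlabeled set U;
# three classes: normal / abusive / hateful.
rng = np.random.default_rng(0)
S_L, y_L = rng.normal(size=(100, 512)), rng.integers(0, 3, size=100)
S_U = rng.normal(size=(400, 512))

# Step 1: fit an MLP (two hidden layers of 50 ReLU units) on the labeled set only.
mlp = MLPClassifier(hidden_layer_sizes=(50, 50), activation="relu",
                    solver="adam", max_iter=10).fit(S_L, y_L)

def h1(X):
    """Task-specific representation: activations of the first hidden layer."""
    return np.maximum(X @ mlp.coefs_[0] + mlp.intercepts_[0], 0.0)

# Step 2: label propagation in the transformed space; -1 marks unlabeled points.
X_all = np.vstack([h1(S_L), h1(S_U)])
y_all = np.concatenate([y_L, np.full(len(S_U), -1)])
lp = LabelPropagation(kernel="knn", n_neighbors=80, max_iter=4).fit(X_all, y_all)
y_U = lp.transduction_[len(S_L):]

# Step 3: retrain the classifier on the pre-trained embeddings of L and U, using
# the original labels for L and the propagated labels for U.
final_clf = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=10).fit(
    np.vstack([S_L, S_U]), np.concatenate([y_L, y_U]))
```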
{
"text": "We consider two datasets, by Founta et al. 2018and Davidson et al. (2017) , containing tweets sampled from Twitter. 3.2 Data Processing",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "Davidson et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Description",
"sec_num": "3.1"
},
{
"text": "We remove all the numbers and punctuations except '.', ',', '!', '-', and apostrophe. The repeated occurrence of the same punctuation is changed to a single one. Hashtags are preprocessed by removing the '#' symbol and those with multiple words are split based on uppercase letters. For example, \"#getBackHome\" is processed into \"get Back Home\". This is done to ensure that the text tokenizer treats multiple words as a sequence of distinct words. However, the hashtags with multiple words without any uppercase is left as it is. Twitter user handles and the symbol 'RT' which indicates re-tweet are removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Preprocessing",
"sec_num": "3.2.1"
},
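A minimal regex sketch of these rules (the paper does not publish its implementation, so the exact patterns and their order are assumptions):

```python
import re

def preprocess_tweet(text):
    text = re.sub(r"\bRT\b", " ", text)  # drop the re-tweet marker
    text = re.sub(r"@\w+", " ", text)    # drop Twitter user handles
    # Split camel-cased hashtags: "#getBackHome" -> "get Back Home";
    # all-lowercase hashtags are left as single tokens.
    text = re.sub(r"#(\w+)",
                  lambda m: re.sub(r"(?<=[a-z])(?=[A-Z])", " ", m.group(1)),
                  text)
    text = re.sub(r"[^A-Za-z.,!\-' ]", " ", text)  # drop numbers, other punctuation
    text = re.sub(r"([.,!\-'])\1+", r"\1", text)   # collapse repeated punctuation
    return re.sub(r"\s+", " ", text).strip()

print(preprocess_tweet("RT @user: #getBackHome now!!!"))  # "get Back Home now!"
```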
{
"text": "We split the datasets randomly into three portions 'training', 'validation', and 'test' sets, each containing 60%, 20%, and 20% respectively. To simulate the semi-supervised learning setup, we partition the training set into labeled and unlabeled set with a ratio of 1:4. The final labeled set consists of 12% of the entire dataset. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Split",
"sec_num": "3.2.2"
},
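A possible realization of this split with scikit-learn (stratifying by class is an assumption; the paper only states the ratios):

```python
from sklearn.model_selection import train_test_split

def make_splits(X, y, seed=42):
    # 60% train / 20% validation / 20% test.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=seed, stratify=y)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)
    # Within train: labeled vs. unlabeled at a 1:4 ratio, so the labeled part
    # is 0.6 * 0.2 = 12% of the entire dataset.
    X_lab, X_unlab, y_lab, _ = train_test_split(
        X_train, y_train, train_size=0.2, random_state=seed, stratify=y_train)
    return (X_lab, y_lab), X_unlab, (X_val, y_val), (X_test, y_test)
```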
{
"text": "We obtain pre-trained sentence embeddings from the transformer-based model of USE 2 . This model is trained on data from Wikipedia, web news, web question-answer pages, and discussion forums. Each sentence embedding is of 512 dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings",
"sec_num": "3.3"
},
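Obtaining these embeddings from TensorFlow Hub might look as follows. This sketch assumes a TF2-compatible release of the large USE model; the paper's footnote points to version 3 of the same module:

```python
import tensorflow_hub as hub

# A newer TF2-compatible release of the encoder referenced in the footnote.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")
vectors = embed(["you are awesome", "go back home"])
print(vectors.shape)  # (2, 512): one 512-dimensional embedding per tweet
```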
{
"text": "The label propagation API provided by scikit-learn library 3 is used. Euclidean distance is used to compute the distances between two data points. Based on the validation set, we have chosen 80 nearest neighbors and a maximum of 4 iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Label Propagation",
"sec_num": "3.4"
},
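The corresponding scikit-learn call is sketched below with toy data; marking unlabeled samples with -1 and reading the propagated labels from `transduction_` follow the library's conventions:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.random.default_rng(1).normal(size=(500, 50))  # stacked representations
y = np.concatenate([np.repeat([0, 1, 2], 20),        # 60 labeled samples
                    np.full(440, -1)])               # -1 = unlabeled

# kNN graph over Euclidean distances, 80 neighbors, at most 4 iterations.
lp = LabelPropagation(kernel="knn", n_neighbors=80, max_iter=4).fit(X, y)
labels = lp.transduction_             # propagated hard labels for all points
posteriors = lp.label_distributions_  # per-class distributions, if needed
```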
{
"text": "We use an MLP with two hidden layers to perform classification and derive transformed representations. As shown in the Figure 1 , the MLP model always has pre-trained representations S as its input. The hidden layers have ReLU activation. The transformed representation\u015c are taken from the 1st or 2nd hidden layers, referred as 'h1-representation' and 'h2-representation' in our experiments, respectively. We use Adam optimizer, early-stopping based on validation set, and maximum of 10 epochs. Both the hidden layers have 50 units. The model weights learnt while training the system with only the labeled data is used to initialize the classifier before training with labeled and unlabeled data. data is fixed, but the amount of labeled data is varied to 10%, 20%, 30%, 50% and 100% of the available labeled set. The 'Baseline' is obtained by training the MLP classifier with only the labeled set, and without using label propagation. The 'Pre-trained', 'h1', and 'h2' show the results of the classifier trained using labeled and unlabeled set, the respective representation was used to label the unlabeled set using label propagation. In all the four cases, pre-trained embeddings were used as input to the MLP classifier.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Model and Representation Setup",
"sec_num": "3.5"
},
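A hedged Keras sketch of such an MLP, with named hidden layers so that the h1 and h2 activations can be read out as sub-models (the layer names, the early-stopping patience, and the framework itself are assumptions; the paper does not specify them):

```python
import tensorflow as tf

def build_mlp(input_dim=512, n_classes=3):
    inputs = tf.keras.Input(shape=(input_dim,))
    h1 = tf.keras.layers.Dense(50, activation="relu", name="h1")(inputs)
    h2 = tf.keras.layers.Dense(50, activation="relu", name="h2")(h1)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(h2)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

model = build_mlp()
# model.fit(S_lab, y_lab, validation_data=(S_val, y_val), epochs=10,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=2,
#                                                       restore_best_weights=True)])

# Sub-models expose the transformed representations for label propagation.
h1_model = tf.keras.Model(model.input, model.get_layer("h1").output)
h2_model = tf.keras.Model(model.input, model.get_layer("h2").output)
```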
{
"text": "Results in Figure 2 and Figure 3 show that semisupervised training using label propagation on pretrained representations performs worse than the 'Baseline' classifier. This implies that label propagation to the unlabeled sets using pre-trained representations introduces significant noise in the classifier. To support this hypothesis, we analyse the intra-class and inter-class separations.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 24,
"end": 32,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Model and Representation Setup",
"sec_num": "3.5"
},
{
"text": "1.33 1.14 2.99 Table 2 shows the average intra-class and interclass euclidean distance across different representations on the training set, containing the labeled and unlabeled set. Ground truth labels are used for this analysis. From Table 2 , we observe that the intra-class and inter-class distance are similar for pre-trained embeddings. This implies that the representations belonging to the samples from the same class were not close to each other and those from different classes were not far from each other. As hypothesised, h1 and h2 capture class information and hence have lower intra-class and higher inter-class distances.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 236,
"end": 243,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model and Representation Setup",
"sec_num": "3.5"
},
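The statistics in Table 2 are plain averages of pairwise Euclidean distances; a minimal SciPy sketch, with toy Gaussian stand-ins for the per-class representation matrices:

```python
import numpy as np
from scipy.spatial.distance import cdist

def mean_intra_class(Z):
    """Average pairwise Euclidean distance within one class (off-diagonal only)."""
    d = cdist(Z, Z)
    n = len(Z)
    return d.sum() / (n * (n - 1))

def mean_inter_class(Za, Zb):
    """Average Euclidean distance between samples of two different classes."""
    return cdist(Za, Zb).mean()

rng = np.random.default_rng(0)
Z_normal, Z_hate = rng.normal(size=(200, 50)), rng.normal(size=(50, 50))
print(mean_intra_class(Z_normal), mean_inter_class(Z_normal, Z_hate))
```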
{
"text": "Further, we qualitatively analyze the euclidean distances across pre-trained representation and h1 representation using color maps, shown in Figure 4 and Figure 5 respectively. We randomly consider 300 samples from Founta et al. (2018) dataset, of which the 1 st set of 100 samples belongs to the labeled set of 'normal' class, the 2 nd and the 3 rd set of 100 samples belong to the labeled sets of 'abusive', and 'hate' classes respectively. From Figure 4 , we observe that the pre-trained representation has similar distance across all the samples, thus appearing in a similar color. This infers that pre-trained representation does not contain any task-specific label information, and are generic. However, as shown in Figure 5 , the intra-class distance of the samples within 'normal' and 'abusive' classes is smaller than their inter-class distance of h1 representation, hence appearing as lighter-colored squares. From these figures, we observe that h1 representation captures task-specific label information. Figure 2 and Figure 3 further show that semisupervised training using label propagation on transformed representations ('h1' and 'h2') performs better than the 'Baseline' when amount of labeled data is small. The performance is comparable for larger labeled data. Thus, semi-supervised learning with label propagation using task-specific representations can have significant advantages when the available labeled samples are very few.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 150,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 155,
"end": 163,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 449,
"end": 457,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 723,
"end": 731,
"text": "Figure 5",
"ref_id": "FIGREF5"
},
{
"start": 1017,
"end": 1025,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1030,
"end": 1038,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Model and Representation Setup",
"sec_num": "3.5"
},
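A color map like Figures 4 and 5 can be produced by plotting the full pairwise distance matrix of the stacked per-class samples; a minimal matplotlib sketch with toy data (with task-specific representations, the three 100x100 diagonal blocks should stand out through smaller within-class distances):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist

# 100 toy samples each for 'normal', 'abusive', and 'hate' (shifted Gaussians
# stand in for the labeled-set representations of one layer).
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(loc=i, size=(100, 50)) for i in range(3)])

plt.imshow(cdist(Z, Z), cmap="viridis")
plt.colorbar(label="Euclidean distance")
plt.title("Pairwise distances: normal | abusive | hate")
plt.savefig("distance_colormap.png")
```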
{
"text": "Furthermore, in a few cases, label propagation using the representations from the intermediate hidden layer 'h1' performed slightly better than label propagation using the representations from the final hidden layer 'h2'. This hints that fully finetuned representation may not always be the best performing representations in the k-Nearest Neighbors space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Representation Setup",
"sec_num": "3.5"
},
{
"text": "In this article, we have explored label propagationbased semi-supervised learning for hate speech classification. We evaluated our approach on two datasets of hate speech on the multi-class classification task. We showed that label propagation using the pre-trained sentence embeddings reduces the performance achieved with only the labeled data. This is because pre-trained embeddings do not contain any task-specific information. We validated this by comparing the average intra-class and interclass distances. Further, training an MLP classifier with a small amount of labeled data and using the activations of its hidden layers as task aware representations improved the performance of the label propagation and semi-supervised training. It also appears that the fully fine-tuned representations from the MLP may not be the best representations for the label propagation. We can conclude that semi-supervised learning based on label propagation helps to improve hate speech classification in very low resource scenarios and that the performance gain reduces with more amount of labeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://tfhub.dev/google/ universal-sentence-encoder-large/33 https://scikit-learn.org/stable/ modules/label_propagation.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the M-PHASIS project supported by the French National Research Agency (ANR) and German National Research Agency (DFG) under contract ANR-18-FRAL-0005.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semi-supervised learning for computational linguistics",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 2007. Semi-supervised learning for computational linguistics. CRC Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aggression detection in social media: Using deep neural networks, data augmentation, and pseudo labeling",
"authors": [
{
"first": "Segun Taofeek",
"middle": [],
"last": "Aroyehun",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "90--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Segun Taofeek Aroyehun and Alexander Gelbukh. 2018. Aggression detection in social media: Us- ing deep neural networks, data augmentation, and pseudo labeling. In Proceedings of the First Work- shop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 90-97.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Deep learning for hate speech detection in tweets",
"authors": [
{
"first": "Pinkesh",
"middle": [],
"last": "Badjatiya",
"suffix": ""
},
{
"first": "Shashank",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web Companion",
"volume": "",
"issue": "",
"pages": "759--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinkesh Badjatiya, Shashank Gupta, Manish Gupta, and Vasudeva Varma. 2017. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 759-760.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Stufiit at semeval-2019 task 5: Multilingual hate speech detection on twitter with muse and elmo embeddings",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Bojkovsk\u1ef3",
"suffix": ""
},
{
"first": "Mat\u00fa\u0161",
"middle": [],
"last": "Pikuliak",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "464--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Bojkovsk\u1ef3 and Mat\u00fa\u0161 Pikuliak. 2019. Stufiit at semeval-2019 task 5: Multilingual hate speech de- tection on twitter with muse and elmo embeddings. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 464-468.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Universal sentence encoder for english",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St John",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "169--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for english. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated hate speech detection and the problem of offensive language",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Davidson",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Warmsley",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Macy",
"suffix": ""
},
{
"first": "Ingmar",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2017,
"venue": "Eleventh international aaai conference on web and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Eleventh international aaai conference on web and social media.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hate me, hate me not: Hate speech detection on facebook",
"authors": [
{
"first": "Fabio",
"middle": [
"Del"
],
"last": "Vigna",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Cimino",
"suffix": ""
},
{
"first": "Felice",
"middle": [],
"last": "Dell'orletta",
"suffix": ""
},
{
"first": "Marinella",
"middle": [],
"last": "Petrocchi",
"suffix": ""
},
{
"first": "Maurizio",
"middle": [],
"last": "Tesconi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Italian Conference on Cybersecurity",
"volume": "",
"issue": "",
"pages": "86--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Del Vigna, Andrea Cimino, Felice Dell'Orletta, Marinella Petrocchi, and Maurizio Tesconi. 2017. Hate me, hate me not: Hate speech detection on face- book. In Proceedings of the First Italian Conference on Cybersecurity, pages 86-95.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hate speech in cyberspace",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Delgado",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Stefancic",
"suffix": ""
}
],
"year": 2014,
"venue": "Wake Forest L. Rev",
"volume": "49",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Delgado and Jean Stefancic. 2014. Hate speech in cyberspace. Wake Forest L. Rev., 49:319.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Large scale crowdsourcing and characterization of twitter abusive behavior",
"authors": [
{
"first": "Antigoni Maria",
"middle": [],
"last": "Founta",
"suffix": ""
},
{
"first": "Constantinos",
"middle": [],
"last": "Djouvas",
"suffix": ""
},
{
"first": "Despoina",
"middle": [],
"last": "Chatzakou",
"suffix": ""
},
{
"first": "Ilias",
"middle": [],
"last": "Leontiadis",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Blackburn",
"suffix": ""
},
{
"first": "Gianluca",
"middle": [],
"last": "Stringhini",
"suffix": ""
},
{
"first": "Athena",
"middle": [],
"last": "Vakali",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sirivianos",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Kourtellis",
"suffix": ""
}
],
"year": 2018,
"venue": "Twelfth International AAAI Conference on Web and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antigoni Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter abusive behavior. In Twelfth International AAAI Conference on Web and Social Media.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fermi at semeval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in twitter",
"authors": [
{
"first": "Vijayasaradhi",
"middle": [],
"last": "Indurthi",
"suffix": ""
},
{
"first": "Bakhtiyar",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chakravartula",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "70--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vijayasaradhi Indurthi, Bakhtiyar Syed, Manish Shri- vastava, Nikhil Chakravartula, Manish Gupta, and Vasudeva Varma. 2019. Fermi at semeval-2019 task 5: Using sentence embeddings to identify hate speech against immigrants and women in twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 70-74.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Challenges in discriminating profanity from hate speech",
"authors": [
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Experimental & Theoretical Artificial Intelligence",
"volume": "30",
"issue": "2",
"pages": "187--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shervin Malmasi and Marcos Zampieri. 2018. Chal- lenges in discriminating profanity from hate speech. Journal of Experimental & Theoretical Artificial In- telligence, 30(2):187-202.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Abusive language detection in online user content",
"authors": [
{
"first": "Chikashi",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Achint",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive lan- guage detection in online user content. In Proceed- ings of the 25th international conference on world wide web, pages 145-153.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Augment to prevent: short-text data augmentation in deep learning for hate-speech classification",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Rizos",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Hemker",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "991--1000",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgios Rizos, Konstantin Hemker, and Bj\u00f6rn Schuller. 2019. Augment to prevent: short-text data augmentation in deep learning for hate-speech clas- sification. In Proceedings of the 28th ACM Inter- national Conference on Information and Knowledge Management, pages 991-1000.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Boosting text classification performance on sexist tweets by text augmentation and text generation using a combination of knowledge graphs",
"authors": [
{
"first": "Sima",
"middle": [],
"last": "Sharifirad",
"suffix": ""
},
{
"first": "Borna",
"middle": [],
"last": "Jafarpour",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Matwin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd workshop on abusive language online (ALW2)",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sima Sharifirad, Borna Jafarpour, and Stan Matwin. 2018. Boosting text classification performance on sexist tweets by text augmentation and text genera- tion using a combination of knowledge graphs. In Proceedings of the 2nd workshop on abusive lan- guage online (ALW2), pages 107-114.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Proceedings of the first workshop on abusive language online",
"authors": [
{
"first": "Zeerak",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "Wendy",
"middle": [
"Hui",
"Kyong"
],
"last": "Chung",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeerak Waseem, Wendy Hui Kyong Chung, Dirk Hovy, and Joel Tetreault. 2017. Proceedings of the first workshop on abusive language online. In Proceed- ings of the First Workshop on Abusive Language On- line.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Towards hate speech detection at large via deep generative modeling",
"authors": [
{
"first": "Tomer",
"middle": [],
"last": "Wullach",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Adler",
"suffix": ""
},
{
"first": "Einat",
"middle": [],
"last": "Minkov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.06370"
]
},
"num": null,
"urls": [],
"raw_text": "Tomer Wullach, Amir Adler, and Einat Minkov. 2020. Towards hate speech detection at large via deep gen- erative modeling. arXiv preprint arXiv:2005.06370.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning from labeled and unlabeled data with label propagation",
"authors": [
{
"first": "Zhu",
"middle": [],
"last": "Xiaojin",
"suffix": ""
},
{
"first": "Ghahramani",
"middle": [],
"last": "Zoubin",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhu Xiaojin and Ghahramani Zoubin. 2002. Learning from labeled and unlabeled data with label propaga- tion. Tech. Rep., Technical Report CMU-CALD-02- 107, Carnegie Mellon University.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Block diagram for semi-supervised learning.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "https://www.hatebase.org Performance on Founta et al. (2018) dataset.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "and 3 show percentage macro-average F1 of classification on the test set in the semisupervised approach. The amount of unlabeled",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Performance onDavidson et al. (2017) dataset.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "Color plot for distances between samples using the pre-trained representations.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "Color plot for distances between samples using the h1 representations.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"5\">Dataset #Samples Normal Abusive Hateful</td></tr><tr><td>Founta</td><td>86.9K</td><td>63%</td><td>31%</td><td>6%</td></tr><tr><td colspan=\"2\">Davidson 24.7K</td><td>17%</td><td>77%</td><td>6%</td></tr></table>",
"html": null,
"text": "shows the statistics of these datasets. Both datasets have an imbalanced class distribution with 'hateful' as a minority class."
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Twitter data by Founta et al. (2018): A large</td></tr><tr><td>part of this dataset is collected using random sam-</td></tr><tr><td>pling. Since hate and abusive speech occurs in a</td></tr><tr><td>very small percentage, the authors chose to perform</td></tr><tr><td>boosted sampling. The dataset has four classes,</td></tr><tr><td>namely 'normal','spam','abusive', and 'hateful'.</td></tr><tr><td>We exclude the samples from the 'spam' class. This</td></tr><tr><td>brings down the total number of samples in the</td></tr><tr><td>dataset from 100K to 86.9K.</td></tr></table>",
"html": null,
"text": "Dataset statistics for Founta et al. (2018) andDavidson et al. (2017)"
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "Average inter-class and intra-class euclidean distance across different representations for the train set of Founta et al. (2018) dataset."
}
}
}
}