{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:05.399135Z" }, "title": "Explainable Detection of Sarcasm in Social Media", "authors": [ { "first": "Ramya", "middle": [], "last": "Akula", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Central Florida", "location": { "country": "USA" } }, "email": "ramya.akula@knights.ucf.edu" }, { "first": "Ivan", "middle": [], "last": "Garibay", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Central Florida", "location": { "country": "USA" } }, "email": "igaribay@ucf.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sarcasm is a linguistic expression often used to communicate the opposite of what is said, usually something that is very unpleasant with an intention to insult or ridicule. Inherent ambiguity in sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations, written in English, from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. We show the effectiveness and interpretability of our approach by achieving state-of-the-art results on datasets from social networking platforms, online discussion forum and political dialogues.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Sarcasm is a linguistic expression often used to communicate the opposite of what is said, usually something that is very unpleasant with an intention to insult or ridicule. Inherent ambiguity in sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations, written in English, from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. We show the effectiveness and interpretability of our approach by achieving state-of-the-art results on datasets from social networking platforms, online discussion forum and political dialogues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sarcasm is a rhetorical way of expressing dislike or negative emotions using different language constructs, such as exaggeration or ridicule. It is an assortment of mockery and false politeness to intensify hostility without explicitly doing so. In face-to-face conversation, facial expressions, gestures, and tone of the speaker provide cues that help in identifying sarcasm. However, recognizing sarcasm in textual communication is not a trivial task as none of these cues are readily available. With the explosion of internet usage, sarcasm detection in online communications from social networking platforms, discussion forums, and e-commerce websites has become crucial for opinion mining, sentiment analysis, and identifying cyberbullies, online trolls. 
Thus, developing computational models for the automatic detection of sarcasm has gathered pace in recent times, with multiple studies and the collection of new datasets (Ghosh and Veale, 2017; Misra and Arora, 2019; Khodak et al., 2018).", "cite_spans": [ { "start": 916, "end": 939, "text": "(Ghosh and Veale, 2017;", "ref_id": "BIBREF3" }, { "start": 940, "end": 962, "text": "Misra and Arora, 2019;", "ref_id": "BIBREF11" }, { "start": 963, "end": 983, "text": "Khodak et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Earlier works on sarcasm detection in text use lexical (content) and pragmatic (context) cues (Kreuz and Caucci, 2007) such as interjections, punctuation, and sentiment shifts, which are major indicators of sarcasm (Joshi et al., 2015). In these works, the features are hand-crafted and cannot generalize in the presence of the informal language and figurative slang widely used in online conversations. With the advent of deep learning, recent works (Ghosh and Veale, 2017; Ilic et al., 2018; Ghosh et al., 2018; Xiong et al., 2019; Liu et al., 2019) leverage neural networks to learn both lexical and contextual features, eliminating the need for hand-crafted features. In these works, word embeddings are incorporated to train deep convolutional, recurrent, or attention-based neural networks to achieve state-of-the-art results. While deep learning-based approaches achieve impressive performance, they lack interpretability. In this work, we also focus on the interpretability of the model along with its high performance.", "cite_spans": [ { "start": 95, "end": 119, "text": "(Kreuz and Caucci, 2007)", "ref_id": "BIBREF9" }, { "start": 218, "end": 238, "text": "(Joshi et al., 2015)", "ref_id": "BIBREF7" }, { "start": 452, "end": 475, "text": "(Ghosh and Veale, 2017;", "ref_id": "BIBREF3" }, { "start": 476, "end": 494, "text": "Ilic et al., 2018;", "ref_id": "BIBREF6" }, { "start": 495, "end": 514, "text": "Ghosh et al., 2018;", "ref_id": "BIBREF4" }, { "start": 515, "end": 534, "text": "Xiong et al., 2019;", "ref_id": "BIBREF20" }, { "start": 535, "end": 552, "text": "Liu et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of our work are: a) We propose an interpretable model for sarcasm detection using self-attention. b) We achieve state-of-the-art results on diverse datasets and exhibit the effectiveness of our model with extensive experimentation and ablation studies. c) We exhibit the interpretability of our model by analyzing the learned attention maps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our proposed approach consists of five components: Data Pre-processing, Multi-Head Self-Attention, Gated Recurrent Units (GRU), Classification, and Model Interpretability. The architecture of our sarcasm detection model is shown in Figure 1. Data pre-processing involves converting the input text into word embeddings, which are required for training a deep learning model. We employ the pre-trained language model BERT (Devlin et al., 2019) to extract word embeddings. We use these word embeddings, which capture global context, as we believe context is essential for detecting sarcasm. These embeddings form the input to our multi-head self-attention module, which identifies words in the input text that provide crucial cues for sarcasm. 
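To make the pre-processing step concrete, the following is a minimal sketch of extracting contextual word embeddings from a pre-trained BERT model with the HuggingFace transformers library (Wolf et al., 2019, cited in Section 3). The model name "bert-base-uncased" and the 768-dimensional output follow the experimental setup reported later in the paper; the exact API calls assume a reasonably recent version of the library and are not the authors' own code.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def embed(sentence: str) -> torch.Tensor:
    """Return contextual embeddings for every token of `sentence` (shape N x 768)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    # last_hidden_state: (1, N, 768); includes the special [CLS] and [SEP] tokens
    return outputs.last_hidden_state.squeeze(0)

embeddings = embed("Oh great, another Monday. I totally needed this.")
print(embeddings.shape)  # torch.Size([N, 768]); N depends on the tokenization
```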
In the next step, the GRU layer aids in learning long-distance relationships among these highlighted words and outputs a single feature vector encoding the entire sequence. Finally, a fully-connected layer with sigmoid activation is used to get the final classification score.", "cite_spans": [ { "start": 405, "end": 426, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 231, "end": 239, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Proposed Approach", "sec_num": "2" }, { "text": "Multi-Head Self-Attention Given a sentence S, we apply a standard tokenizer and use pre-trained models to obtain D-dimensional embeddings for the individual words in the sentence. These embeddings S = {e_1, e_2, ..., e_N}, S ∈ R^{N×D}, form the input to our model. To detect sarcasm in sentence S, it is crucial to identify specific words that provide essential cues such as sarcastic connotations and negative emotions. The importance of these cue-words depends on multiple factors and on the context. In our proposed model, we leverage multi-head self-attention to identify these cue-words in the input text. Attention is a mechanism to discover patterns in the input that are crucial for solving the given task. In deep learning, self-attention (Vaswani et al., 2017) is an attention mechanism for sequences, which helps in learning the task-specific relationships between different elements of a given sequence to produce a better sequence representation. In the self-attention module, three linear projections of the given input sequence are generated: Key (K), Value (V), and Query (Q), where K, Q, V ∈ R^{N×D}. The attention map is computed from the similarity between K and Q, and the output of this module, A ∈ R^{N×D}, is the scaled dot-product of V with the learned softmax attention softmax(QK^T). In multi-head self-attention, multiple copies of the self-attention module are used in parallel. Each head captures a different relationship between the words in the input text and identifies the keywords that aid in classification. In our model, we use a series of multi-head self-attention layers (#L) with multiple heads (#H) in each layer.", "cite_spans": [ { "start": 759, "end": 781, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "2" }, { "text": "Gated Recurrent Units Self-attention finds the words in the text that are important for detecting sarcasm. These words can be close to each other or farther apart in the input text. To learn long-distance relationships between these words, we use GRUs. These units are an improvement over standard recurrent neural networks and are designed to dynamically remember and forget the information flow using Reset (r_t) and Update (z_t) gates, mitigating the vanishing gradient problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "2" }, { "text": "Classification A single fully-connected feed-forward layer is used with sigmoid activation to compute the final output. 
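The multi-head self-attention and GRU components described above can be sketched from scratch as follows: K, Q, and V are linear projections of the input, each head computes softmax(QK^T / sqrt(d_head)) V, and the heads are concatenated back to the model dimension before a bi-directional GRU encodes the sequence. This is an illustration of the mechanism in PyTorch under assumed dimensions, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """One multi-head self-attention layer: K, Q, V projections of the input,
    per-head attention softmax(QK^T / sqrt(d_head)) V, output A of shape (N, D)."""

    def __init__(self, dim: int, heads: int):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.d_head = heads, dim // heads
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                                   # x: (B, N, dim)
        B, N, D = x.shape
        def split(t):                                       # (B, N, D) -> (B, H, N, d_head)
            return t.view(B, N, self.heads, self.d_head).transpose(1, 2)
        Q, K, V = split(self.q(x)), split(self.k(x)), split(self.v(x))
        scores = Q @ K.transpose(-2, -1) / math.sqrt(self.d_head)   # (B, H, N, N)
        attn = scores.softmax(dim=-1)                       # per-head attention maps
        A = (attn @ V).transpose(1, 2).reshape(B, N, D)     # concatenate the heads
        return self.out(A), attn

# Stacking #L such layers and encoding the result with a GRU, as in the model:
layer = MultiHeadSelfAttention(dim=768, heads=8)
gru = nn.GRU(input_size=768, hidden_size=512, batch_first=True, bidirectional=True)
x = torch.randn(2, 12, 768)          # a batch of 2 sentences with 12 word embeddings each
features, attention_maps = layer(x)
_, h_n = gru(features)               # h_n holds the final hidden states encoding each sequence
```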
Input to this layer is the feature vector h_N from the GRU module, and the output is a probability score y ∈ [0, 1], where ŷ ∈ {0, 1} is the binary label, i.e., 1: Sarcasm and 0: No-sarcasm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "2" }, { "text": "Model Interpretability Developing models that can explain their predictions is crucial to building trust and faith in deep learning, while enabling a wide range of applications with machine intelligence at their backbone. Existing deep learning architectures such as convolutional and recurrent neural networks are not inherently interpretable and require additional visualization techniques (Zhou et al., 2016; Selvaraju et al., 2017). To avoid this, in this work we employ self-attention, which is inherently interpretable and allows identifying the elements in the input that are crucial for a given task.", "cite_spans": [ { "start": 397, "end": 416, "text": "(Zhou et al., 2016;", "ref_id": "BIBREF22" }, { "start": 417, "end": 440, "text": "Selvaraju et al., 2017)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Proposed Approach", "sec_num": "2" }, { "text": "We implement our model in PyTorch (Paszke et al., 2019), a deep-learning framework in Python. To tokenize and extract word embeddings for the input text, we use publicly available resources (Wolf et al., 2019). Specifically, we use the tokenizer and pre-trained weights of the \"bert-base-uncased\" model to convert words to tokens and then convert tokens to word embeddings. The embeddings for the words in the input text are passed through a series of multi-head self-attention layers #L, with multiple heads #H in each of the layers. The output from the self-attention layers is passed through a single bi-directional GRU layer with hidden dimension d = 512. The 512-dimensional output feature vector from the GRU layer is passed through the fully-connected layer to get a 1-dimensional output. A sigmoid activation is applied to the final output, and the BCE loss is used to compute the loss between the ground truth and the predicted probability score. We use the Adam optimizer to train our model, which has approximately 13 million parameters, with a learning rate of 1e-4, a batch size of 64, and dropout set to 0.2. We use one NVIDIA Pascal Titan-X GPU with 16GB memory for all our experiments. We set #H = 8 and #L = 3 in all our experiments, for all the datasets. Details of these datasets, including the sample counts in the train/test splits and the data source, are presented in Table 1.", "cite_spans": [ { "start": 34, "end": 55, "text": "(Paszke et al., 2019)", "ref_id": "BIBREF14" }, { "start": 191, "end": 210, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 1368, "end": 1375, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Evaluation We pose Sarcasm Detection as a classification problem, and use Precision, Recall, F1-Score, and Accuracy as evaluation metrics to test the performance of the trained models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Table 1 statistics: Twitter, 2013 (Riloff et al., 2013): 1,368 train, 588 test, 1,956 total; Dialogues, 2016 (Oraby et al., 2016): 3,754 train, 938 test, 4,692 total; Reddit, 2018 (Khodak et al., 2018): 154,702 train, 64,666 test, 219,368 total. These are sourced from varied online platforms including social networks and discussion forums. 
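Putting the pieces together, the sketch below wires up the reported configuration (#L = 3 self-attention layers, #H = 8 heads, a bi-directional GRU with hidden size 512, dropout 0.2, a 1-dimensional fully-connected output, sigmoid with BCE loss, and Adam with a learning rate of 1e-4). It relies on PyTorch built-ins (nn.MultiheadAttention with batch_first needs PyTorch 1.9 or newer), folds the sigmoid into BCEWithLogitsLoss for numerical stability, and concatenates the two GRU directions; these are assumptions of the sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    def __init__(self, dim=768, heads=8, layers=3, hidden=512, dropout=0.2):
        super().__init__()
        # A stack of #L multi-head self-attention layers with #H heads each
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, dropout=dropout, batch_first=True)
            for _ in range(layers)
        )
        self.gru = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(2 * hidden, 1)          # 1-dimensional output (logit)

    def forward(self, emb):                          # emb: (B, N, 768) BERT word embeddings
        x = emb
        for attn in self.attn:
            x, _ = attn(x, x, x)                     # self-attention: query = key = value
        _, h = self.gru(x)                           # h: (2, B, hidden) final states
        feature = torch.cat([h[0], h[1]], dim=-1)    # concatenate both directions
        return self.fc(self.drop(feature)).squeeze(-1)

model = SarcasmClassifier()
criterion = nn.BCEWithLogitsLoss()                   # sigmoid + binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(emb_batch, labels):                   # labels in {0, 1}; batch size 64 in the paper
    optimizer.zero_grad()
    loss = criterion(model(emb_batch), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```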
Apart from these standard metrics, we also compute the Area Under the ROC Curve (AUC), which is threshold-independent.", "cite_spans": [ { "start": 124, "end": 145, "text": "(Riloff et al., 2013)", "ref_id": "BIBREF15" }, { "start": 164, "end": 184, "text": "(Oraby et al., 2016)", "ref_id": "BIBREF12" }, { "start": 204, "end": 225, "text": "(Khodak et al., 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 6, "end": 100, "text": "Test Total Twitter, 2013 1,368 588 1,956 Dialogues, 2016 3754 938 4,692 Reddit, 2018", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Source", "sec_num": null }, { "text": "We present the results of our experiments on multiple publicly available datasets in this section. Results on the Twitter dataset are presented in Table 2. In Table 4, we present the results on the Reddit SARC 2.0 dataset, which is divided into two subsets: Main and Political. On both datasets, our proposed approach outperforms previous methods. To compare our approach with Hazarika et al. 2018, we trained our models with and without the personality features, and we show improvement in both settings. Similar to Hazarika et al. 2018, we use the personality features extracted from a CNN model trained on a multi-label personality detection task using all the comments from a user. These features are appended to the features from the input text before passing them to the final classification layer in the model. Apart from the Twitter and Reddit data, we also experimented with data from one other source, i.e., Political Dialogues. In Table 3, we present results on the corresponding Sarcasm Corpus V2 Dialogues dataset (Oraby et al., 2016). We use this dataset (Oraby et al., 2016) for the following ablation studies.", "cite_spans": [ { "start": 1033, "end": 1053, "text": "(Oraby et al., 2016)", "ref_id": "BIBREF12" }, { "start": 1076, "end": 1096, "text": "(Oraby et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 147, "end": 154, "text": "Table 2", "ref_id": null }, { "start": 160, "end": 167, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 947, "end": 954, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Ablation 1: We vary the number of self-attention layers and fix the number of heads per layer (#H = 8). From the results of this experiment, presented in Table 5, we observe that as the number of self-attention layers increases (#L = 0, 1, 3, 5), the improvement in the performance of the model due to the additional layers saturates. Also, these results show that the proposed multi-head self-attention model achieves a 2% improvement over the baseline model, where only a single GRU layer is used without any self-attention layers.", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 161, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Ablation 2: We vary the number of heads per layer with a fixed number of self-attention layers (#L = 3). The results of these experiments are presented in Table 6. We observe that the performance of the model also increases with the number of heads per self-attention layer.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 160, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "Attention maps from the individual heads of the self-attention layers provide the learned attention weights for each time-step in the input. 
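The evaluation metrics listed above can be computed directly from the gold labels and the predicted probability scores. The sketch below uses scikit-learn, which is not mentioned in the paper and stands in for any equivalent implementation; the 0.5 decision threshold is an assumption, while the AUC is computed from the raw probabilities and therefore needs no threshold.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)

def evaluate(y_true, y_prob, threshold=0.5):
    """Precision, Recall, F1, Accuracy, and AUC for binary sarcasm detection."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary")
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "accuracy": accuracy_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),   # threshold-independent
    }

print(evaluate([1, 0, 1, 1, 0], [0.94, 0.20, 0.71, 0.43, 0.08]))
```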
In our case, each time-step is a word, and we visualize the per-word attention weights for sample sentences with and without sarcasm from the SARC 2.0 Main dataset. The model we used for this analysis has 5 attention layers with 8 heads per layer. Figure 2 shows the attention analysis (Clark et al., 2019) for sample sentences with and without sarcasm, respectively. Each column in these figures corresponds to a single attention layer, and the attention weights between words in each head are represented using colored edges. The darkness of an edge indicates the strength of the attention weight. CLS and SEP are the classification and separator tokens from BERT.", "cite_spans": [ { "start": 426, "end": 446, "text": "(Clark et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Model Interpretability", "sec_num": "5" }, { "text": "Attention Analysis For a sentence with sarcasm, Figure 2 shows that certain words receive more attention than others. For instance, words such as \"just\", \"again\", \"totally\", and \"!\" have darker edges connecting them with every other word in the sentence. These are the words in the sentence that hint at sarcasm and, as expected, they receive higher attention than the others. Also, note that each cue word is attended to by a different head in the first three layers of self-attention. In the final two layers, we observe that the attention is spread out over every word in the sentence, indicating the redundancy of these layers in the model. The attention weight for a word is computed by first considering the maximum attention it receives across layers and then averaging the weights across the multiple heads in that layer. Finally, the weight for a word is averaged over all the words in the sentence. The stronger the highlight for a word, the higher the attention weight placed on it by the model while classifying the sentence. Words from the sarcastic sentences with higher weights show that the model can detect sarcastic cues in the sentence, for example, the words \"totally\", \"first\", \"ever\" from the first sentence and \"even\", \"until\", \"already\" from the third sentence. These are the words that exhibit sarcasm in the sentences, which the model successfully identifies. In all the samples classified as non-sarcasm, the weights for the individual words are very low in comparison to those of the cue-words from the sarcastic sentences. Our model predicts high scores for sarcastic sentences and low scores for non-sarcastic sentences.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Model Interpretability", "sec_num": "5" }, { "text": "In this work, we propose a novel multi-head self-attention-based neural network architecture to detect sarcasm in a given sentence. Our proposed approach has five components: data pre-processing, a multi-head self-attention module, a gated recurrent unit module, classification, and model interpretability. Multi-head self-attention is used to highlight the parts of the sentence that provide crucial cues for sarcasm detection. GRUs aid in learning long-distance relationships among these highlighted words in the sentence. The output from this layer is passed through a fully-connected classification layer to get the final classification score. 
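The per-word aggregation used in the attention analysis of Section 5 can be sketched as follows. The tensor layout and the exact order of the max and averaging operations are one plausible reading of the description (average over heads and over attending words within a layer, then the maximum over layers), not a specification taken from the authors' code.

```python
import torch

def word_scores(attn_maps):
    """Aggregate per-layer, per-head attention maps into one score per word.

    attn_maps: list over layers of tensors of shape (heads, N, N), where
    entry [h, i, j] is the attention word i places on word j in head h.
    """
    per_layer = []
    for layer in attn_maps:
        received = layer.mean(dim=0)             # (N, N): average over the heads
        per_layer.append(received.mean(dim=0))   # (N,): average attention each word receives
    return torch.stack(per_layer).max(dim=0).values   # (N,): strongest layer per word

# Example with random maps: 3 layers, 8 heads, a 6-word sentence
maps = [torch.softmax(torch.randn(8, 6, 6), dim=-1) for _ in range(3)]
print(word_scores(maps))   # higher values ~ words the model highlights as sarcasm cues
```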
Exper-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "Precision Recall F1 AUC Fracking Sarcasm (Ghosh and Veale, 2016) 88.3 87.9 88.1 -GRNN (Zhang et al., 2016) 66.3 64.7 65.4 -ELMo-BiLSTM (Ilic et al., 2018) 75.9 75.0 75.9 -ELMo-BiLSTM FULL (Ilic et al., 2018) 77.8 73.5 75.3 -ELMo-BiLSTM AUG (Ilic et al., 2018) 68.4 70.8 69.4 -A2Text-Net (Liu et al., 2019) 91.7 91.0 90.0 97.0", "cite_spans": [ { "start": 41, "end": 64, "text": "(Ghosh and Veale, 2016)", "ref_id": "BIBREF2" }, { "start": 86, "end": 106, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF21" }, { "start": 135, "end": 154, "text": "(Ilic et al., 2018)", "ref_id": "BIBREF6" }, { "start": 188, "end": 207, "text": "(Ilic et al., 2018)", "ref_id": "BIBREF6" }, { "start": 240, "end": 259, "text": "(Ilic et al., 2018)", "ref_id": "BIBREF6" }, { "start": 287, "end": 305, "text": "(Liu et al., 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "Our Model 97.9 99.6 98.7 99.6 (+ 6.2 \u2191) (+ 8.6 \u2191) (+ 8.7 \u2191) (+ 2.6 \u2191) Table 2 : Results on Twitter dataset (Riloff et al., 2013) .", "cite_spans": [ { "start": 107, "end": 128, "text": "(Riloff et al., 2013)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "Precision Recall F1 AUC GRNN (Zhang et al., 2016) 62.2 61.8 61.2 -CNN-LSTM-DNN (Ghosh and Veale, 2016) 66.1 66.7 65.7 -SIARN (Tay et al., 2018) 72.1 71.8 71.8 -MIARN (Tay et al., 2018) 72.9 72.9 72.7 -ELMo-BiLSTM (Ilic et al., 2018) 74.8 74.7 74.7 -ELMo-BiLSTM FULL (Ilic et al., 2018) 76.0 76.0 76.0 -Our Model 77.4 77.2 77.2 0.834 ( + 1.2 \u2191) ( + 1.4 \u2191) ( + 1.2 \u2191) (Hazarika et al., 2018) 77.0 77.0 74.0 75.0 SARC 2.0 (Khodak et al., 2018) 75.0 -76.0 -ELMo-BiLSTM (Ilic et al., 2018) 72.0 -78.0 -ELMo-BiLSTM FULL (Ilic et al., 2018) 76.0 76.0 72.0 72.0 (Khodak et al., 2018) . Figure 2 : Attention analysis with sample sentence with sarcasm. Words providing cues for sarcasm, highlighted in green, are the words with higher attention weights. The prediction score for this sentence by our model is 0.94. iments are conducted on two datasets from different data sources and show significant improvement over the state-of-the-art models by all evaluation metrics. Results from ablation studies and analysis of the trained model are presented to show the importance of different components of our model. We analyze the learned attention weights to interpret our trained model and show that it can indeed identify words in the input text which provide cues for sarcasm. 
Table 6 : Ablation study with varying number of Heads #H and fixed Layers #L = 3 on the Sarcasm Corpus V2 Dialogues dataset (Oraby et al., 2016) .", "cite_spans": [ { "start": 29, "end": 49, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF21" }, { "start": 79, "end": 102, "text": "(Ghosh and Veale, 2016)", "ref_id": "BIBREF2" }, { "start": 125, "end": 143, "text": "(Tay et al., 2018)", "ref_id": "BIBREF17" }, { "start": 166, "end": 184, "text": "(Tay et al., 2018)", "ref_id": "BIBREF17" }, { "start": 213, "end": 232, "text": "(Ilic et al., 2018)", "ref_id": "BIBREF6" }, { "start": 266, "end": 285, "text": "(Ilic et al., 2018)", "ref_id": "BIBREF6" }, { "start": 366, "end": 389, "text": "(Hazarika et al., 2018)", "ref_id": "BIBREF5" }, { "start": 419, "end": 440, "text": "(Khodak et al., 2018)", "ref_id": "BIBREF8" }, { "start": 465, "end": 484, "text": "(Ilic et al., 2018)", "ref_id": "BIBREF6" }, { "start": 514, "end": 533, "text": "(Ilic et al., 2018)", "ref_id": "BIBREF6" }, { "start": 554, "end": 575, "text": "(Khodak et al., 2018)", "ref_id": "BIBREF8" }, { "start": 1391, "end": 1411, "text": "(Oraby et al., 2016)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 578, "end": 586, "text": "Figure 2", "ref_id": null }, { "start": 1267, "end": 1274, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "What does bert look at? an analysis of bert's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "276--286", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does bert look at? an analysis of bert's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of NAACL: Human Language Technologies", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of NAACL: Human Language Technologies, pages 4171-4186.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fracking sarcasm using neural network", "authors": [ { "first": "Aniruddha", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 7th workshop on computational approaches to subjectivity, sentiment and social media analysis", "volume": "", "issue": "", "pages": "161--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking sar- casm using neural network. In Proceedings of the 7th workshop on computational approaches to sub- jectivity, sentiment and social media analysis, pages 161-169.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Magnets for sarcasm: making sarcasm detection timely, contextual and very personal", "authors": [ { "first": "Aniruddha", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on EMNLP", "volume": "", "issue": "", "pages": "482--491", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aniruddha Ghosh and Tony Veale. 2017. Magnets for sarcasm: making sarcasm detection timely, contex- tual and very personal. In Proceedings of the 2017 Conference on EMNLP, pages 482-491.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Sarcasm analysis using conversation context", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Alexander R Fabbri", "suffix": "" }, { "first": "", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2018, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "755--792", "other_ids": {}, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Alexander R Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversation context. Computational Linguistics, pages 755-792.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Cascade: Contextual sarcasm detection in online discussion forums", "authors": [ { "first": "Devamanyu", "middle": [], "last": "Hazarika", "suffix": "" }, { "first": "Soujanya", "middle": [], "last": "Poria", "suffix": "" }, { "first": "Sruthi", "middle": [], "last": "Gorantla", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Cambria", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Zimmermann", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1837--1848", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devamanyu Hazarika, Soujanya Poria, Sruthi Gorantla, Erik Cambria, Roger Zimmermann, and Rada Mihal- cea. 2018. Cascade: Contextual sarcasm detection in online discussion forums. 
In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 1837-1848.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Deep contextualized word representations for detecting sarcasm and irony", "authors": [ { "first": "Suzana", "middle": [], "last": "Ilic", "suffix": "" }, { "first": "Edison", "middle": [], "last": "Marrese-Taylor", "suffix": "" }, { "first": "Jorge", "middle": [], "last": "Balazs", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Matsuo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "2--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Suzana Ilic, Edison Marrese-Taylor, Jorge Balazs, and Yutaka Matsuo. 2018. Deep contextualized word representations for detecting sarcasm and irony. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 2-7.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Harnessing context incongruity for sarcasm detection", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Vinita", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the ACL and the 7th IJCNLP", "volume": "", "issue": "", "pages": "757--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhat- tacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd An- nual Meeting of the ACL and the 7th IJCNLP, pages 757-762.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A large self-annotated corpus for sarcasm", "authors": [ { "first": "Mikhail", "middle": [], "last": "Khodak", "suffix": "" }, { "first": "Nikunj", "middle": [], "last": "Saunshi", "suffix": "" }, { "first": "Kiran", "middle": [], "last": "Vodrahalli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2018. A large self-annotated corpus for sarcasm. In Proceedings of the Eleventh International Confer- ence on Language Resources and Evaluation (LREC 2018).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Lexical influences on the perception of sarcasm", "authors": [ { "first": "J", "middle": [], "last": "Roger", "suffix": "" }, { "first": "Gina", "middle": [ "M" ], "last": "Kreuz", "suffix": "" }, { "first": "", "middle": [], "last": "Caucci", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Workshop on computational approaches to Figurative Language", "volume": "", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger J Kreuz and Gina M Caucci. 2007. Lexical in- fluences on the perception of sarcasm. In Proceed- ings of the Workshop on computational approaches to Figurative Language, pages 1-4. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A2text-net: A novel deep neural network for sarcasm detection", "authors": [ { "first": "Liyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jennifer", "middle": [ "Lewis" ], "last": "Priestley", "suffix": "" }, { "first": "Yiyun", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "E", "middle": [], "last": "Herman", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Ray", "suffix": "" }, { "first": "", "middle": [], "last": "Han", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE First International Conference on Cognitive Machine Intelligence (CogMI)", "volume": "", "issue": "", "pages": "118--126", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liyuan Liu, Jennifer Lewis Priestley, Yiyun Zhou, Her- man E Ray, and Meng Han. 2019. A2text-net: A novel deep neural network for sarcasm detection. In 2019 IEEE First International Conference on Cogni- tive Machine Intelligence (CogMI), pages 118-126. IEEE.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sarcasm detection using hybrid neural network", "authors": [ { "first": "Rishabh", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Prahal", "middle": [], "last": "Arora", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.07414" ] }, "num": null, "urls": [], "raw_text": "Rishabh Misra and Prahal Arora. 2019. Sarcasm de- tection using hybrid neural network. arXiv preprint arXiv:1908.07414.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Creating and characterizing a diverse corpus of sarcasm in dialogue", "authors": [ { "first": "Shereen", "middle": [], "last": "Oraby", "suffix": "" }, { "first": "Vrindavan", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Lena", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Ernesto", "middle": [], "last": "Hernandez", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shereen Oraby, Vrindavan Harrison, Lena Reed, Ernesto Hernandez, Ellen Riloff, and Marilyn Walker. 2016. Creating and characterizing a diverse corpus of sarcasm in dialogue. 
In Proceedings of the 17th", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Annual Meeting of the Special Interest Group on Discourse and Dialogue", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "31--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Special Interest Group on Dis- course and Dialogue, pages 31-41.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Pytorch: An imperative style, high-performance deep learning library", "authors": [ { "first": "Adam", "middle": [], "last": "Paszke", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Francisco", "middle": [], "last": "Massa", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Lerer", "suffix": "" }, { "first": "James", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Chanan", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Killeen", "suffix": "" }, { "first": "Zeming", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Natalia", "middle": [], "last": "Gimelshein", "suffix": "" }, { "first": "Luca", "middle": [], "last": "Antiga", "suffix": "" }, { "first": "Alban", "middle": [], "last": "Desmaison", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Kopf", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Devito", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning li- brary. In Advances in Neural Information Processing Systems 32.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sarcasm as contrast between a positive sentiment and negative situation", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Surve", "suffix": "" }, { "first": "Lalindra De", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on EMNLP", "volume": "", "issue": "", "pages": "704--714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. 
In Proceedings of the 2013 Conference on EMNLP, pages 704-714.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "authors": [ { "first": "R", "middle": [], "last": "Ramprasaath", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Selvaraju", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Cogswell", "suffix": "" }, { "first": "Ramakrishna", "middle": [], "last": "Das", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Vedantam", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "618--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618-626.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Reasoning with sarcasm by reading in-between", "authors": [ { "first": "Yi", "middle": [], "last": "Tay", "suffix": "" }, { "first": "Anh", "middle": [ "Tuan" ], "last": "Luu", "suffix": "" }, { "first": "Siu", "middle": [ "Cheung" ], "last": "Hui", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "1010--1020", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yi Tay, Anh Tuan Luu, Siu Cheung Hui, and Jian Su. 2018. Reasoning with sarcasm by reading in-between. In Proceedings of the 56th Annual Meeting of the ACL, pages 1010-1020.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information process- ing systems, pages 5998-6008.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Huggingface's transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "R'emi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sarcasm detection with self-matching networks and low-rank bilinear pooling", "authors": [ { "first": "Tao", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Peiran", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hongbo", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Yihui", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2019, "venue": "The World Wide Web Conference", "volume": "", "issue": "", "pages": "2115--2124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Xiong, Peiran Zhang, Hongbo Zhu, and Yihui Yang. 2019. Sarcasm detection with self-matching net- works and low-rank bilinear pooling. In The World Wide Web Conference, pages 2115-2124.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Tweet sarcasm detection using deep neural network", "authors": [ { "first": "Meishan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Guohong", "middle": [], "last": "Fu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, The 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2449--2460", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Tweet sarcasm detection using deep neural network. 
In Proceedings of COLING 2016, The 26th Inter- national Conference on Computational Linguistics: Technical Papers, pages 2449-2460.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning deep features for discriminative localization", "authors": [ { "first": "Bolei", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Khosla", "suffix": "" }, { "first": "Agata", "middle": [], "last": "Lapedriza", "suffix": "" }, { "first": "Aude", "middle": [], "last": "Oliva", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "2921--2929", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. 2016. Learning deep features for discriminative localization. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 2921-2929.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Multi head self-attention architecture for sarcasm detection. Pre-trained word embeddings are extracted for input text and are enhanced by an attention module with L self-attention layers and H heads per layer. Resultant features are passed through a Gated Recurrent Unit and a Feed-forward layer for classification.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "0 \u2191) ( + 4.0 \u2191) ( + 3.0 \u2191) ( + 2.0 \u2191)", "type_str": "figure" }, "TABREF0": { "html": null, "text": "Statistics of datasets used in our experiments.", "type_str": "table", "content": "", "num": null }, "TABREF1": { "html": null, "text": "Results on Sarcasm Corpus V2 Dialogues dataset(Oraby et al., 2016)", "type_str": "table", "content": "
Models | Main Accuracy | Main F1 | Political Accuracy | Political F1
CASCADE
", "num": null }, "TABREF2": { "html": null, "text": "Results on Reddit dataset SARC 2.0 and SARC 2.0 Political", "type_str": "table", "content": "", "num": null }, "TABREF3": { "html": null, "text": "Ablation study with varying number of attention layers #L and fixed Heads #H = 8 on the Sarcasm Corpus V2 Dialogues dataset(Oraby et al., 2016).", "type_str": "table", "content": "
Table 5 (varying #L with #H = 8):
#L - Layers | Precision | Recall | F1
0 (GRU only) | 75.6 | 75.6 | 75.6
1 Layer | 76.2 | 76.1 | 76.1
3 Layers | 77.4 | 77.2 | 77.2
5 Layers | 77.6 | 77.6 | 77.6
Table 6 (varying #H with #L = 3):
#H - Heads | Precision | Recall | F1
1 Head | 74.9 | 74.5 | 74.4
4 Heads | 76.9 | 76.8 | 76.8
8 Heads | 77.4 | 77.2 | 77.2
", "num": null } } } }