{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:00.689780Z" }, "title": "Does Commonsense help in detecting Sarcasm?", "authors": [ { "first": "Somnath", "middle": [], "last": "Basu", "suffix": "", "affiliation": {}, "email": "somnath@cs.unc.edu" }, { "first": "Roy", "middle": [], "last": "Chowdhury", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Snigdha", "middle": [], "last": "Chaturvedi", "suffix": "", "affiliation": {}, "email": "snigdha@cs.unc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sarcasm detection is important for several NLP tasks such as sentiment identification in product reviews, user feedback, and online forums. It is a challenging task requiring a deep understanding of language, context, and world knowledge. In this paper, we investigate whether incorporating commonsense knowledge helps in sarcasm detection. For this, we incorporate commonsense knowledge into the prediction process using a graph convolution network with pre-trained language model embeddings as input. Our experiments with three sarcasm detection datasets indicate that the approach does not outperform the baseline model. We perform an exhaustive set of experiments to analyze where commonsense support adds value and where it hurts classification.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Sarcasm detection is important for several NLP tasks such as sentiment identification in product reviews, user feedback, and online forums. It is a challenging task requiring a deep understanding of language, context, and world knowledge. In this paper, we investigate whether incorporating commonsense knowledge helps in sarcasm detection. For this, we incorporate commonsense knowledge into the prediction process using a graph convolution network with pre-trained language model embeddings as input. Our experiments with three sarcasm detection datasets indicate that the approach does not outperform the baseline model. We perform an exhaustive set of experiments to analyze where commonsense support adds value and where it hurts classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The topic of sarcasm has received attention in various research fields like linguistics (Utsumi, 2000) , psychology (Gibbs, 1986; Kreuz and Glucksberg, 1989) and the cognitive sciences (Gibbs Jr et al., 2007) . Identifying sarcasm is essential to understanding the opinion and intent of a user in downstream tasks like opinion mining, sentiment classification, etc. Initial approaches for this task (Kreuz and Glucksberg, 1989 ) mostly relied on handcrafted features to capture the lexical and contextual information. 
On similar lines, the efficacy of special characters, emojis and n-gram features in the discrimination task have also been studied (Carvalho et al., 2009; Lukin and Walker, 2013) .", "cite_spans": [ { "start": 88, "end": 102, "text": "(Utsumi, 2000)", "ref_id": "BIBREF29" }, { "start": 116, "end": 129, "text": "(Gibbs, 1986;", "ref_id": "BIBREF13" }, { "start": 130, "end": 157, "text": "Kreuz and Glucksberg, 1989)", "ref_id": "BIBREF20" }, { "start": 185, "end": 208, "text": "(Gibbs Jr et al., 2007)", "ref_id": "BIBREF14" }, { "start": 399, "end": 426, "text": "(Kreuz and Glucksberg, 1989", "ref_id": "BIBREF20" }, { "start": 649, "end": 672, "text": "(Carvalho et al., 2009;", "ref_id": "BIBREF4" }, { "start": 673, "end": 696, "text": "Lukin and Walker, 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "In recent years, this task has gained traction in the machine learning and computational linguistic community (Davidov et al., 2010; Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011; Riloff et al., 2013; Maynard and Greenwood, 2014; Wallace et al., 2014; \"I loved the movie so much that I left during the interval\"", "cite_spans": [ { "start": 110, "end": 132, "text": "(Davidov et al., 2010;", "ref_id": "BIBREF8" }, { "start": 133, "end": 162, "text": "Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011;", "ref_id": "BIBREF15" }, { "start": 163, "end": 183, "text": "Riloff et al., 2013;", "ref_id": "BIBREF27" }, { "start": 184, "end": 212, "text": "Maynard and Greenwood, 2014;", "ref_id": "BIBREF22" }, { "start": 213, "end": 234, "text": "Wallace et al., 2014;", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "to watch the movie to watch it to go to the theatre to be fun to go home to be alone to watch something else Before the event (xWant) After the event (xEffect) Figure 1 : COMET output for the sentence \"I loved the movie so much that I left during the interval\". The commonsense sequences capture the contrast between intent and action of the subject. Joshi et al., 2015; Muresan et al., 2016; Amir et al., 2016; Mishra et al., 2016; Ghosh and Veale, 2017; Chakrabarty et al., 2020) . Several approaches have studied the role of context in this sarcasm detection task (Ghosh et al., 2020) . However, none of the previous works have explored the idea of incorporating commonsense knowledge in sarcasm detection. Common sense has been used in several natural-language based tasks like controllable story generation (Zhang et al., 2020; Brahman and Chaturvedi, 2020) , sentence classification (Chen et al., 2019 ), question answering (Dzendzik et al., 2020) , natural language inference (K M et al., 2018; Wang et al., 2019) and other related tasks but not for sarcasm detection. We hypothesize that commonsense knowledge, capturing general beliefs and world knowledge, can prove instrumental in understanding sarcasm. For example in Figure 1 , for the event \"I loved the movie so much that I left during the interval\" (an example of sarcasm with polarity contrast), we show how commonsense is able to capture the contrast between the intentions of the subject before and during the event. Incorporating such commonsense knowledge ideally should make it easier for the learning model to detect sarcasm where it is not apparent from the input. With this motivation, we study the utility of common sense information for sarcasm detection. 
For this, we leverage COMET (Bosselut et al., 2019) to extract the relevant social commonsense information for a sentence. Given an event, COMET provides likely scenarios relating to various attributes like intent of the subject, effect on the object etc.", "cite_spans": [ { "start": 126, "end": 133, "text": "(xWant)", "ref_id": null }, { "start": 351, "end": 370, "text": "Joshi et al., 2015;", "ref_id": "BIBREF17" }, { "start": 371, "end": 392, "text": "Muresan et al., 2016;", "ref_id": "BIBREF25" }, { "start": 393, "end": 411, "text": "Amir et al., 2016;", "ref_id": "BIBREF0" }, { "start": 412, "end": 432, "text": "Mishra et al., 2016;", "ref_id": "BIBREF23" }, { "start": 433, "end": 455, "text": "Ghosh and Veale, 2017;", "ref_id": "BIBREF10" }, { "start": 456, "end": 481, "text": "Chakrabarty et al., 2020)", "ref_id": null }, { "start": 567, "end": 587, "text": "(Ghosh et al., 2020)", "ref_id": "BIBREF12" }, { "start": 812, "end": 832, "text": "(Zhang et al., 2020;", "ref_id": "BIBREF34" }, { "start": 833, "end": 862, "text": "Brahman and Chaturvedi, 2020)", "ref_id": "BIBREF3" }, { "start": 889, "end": 907, "text": "(Chen et al., 2019", "ref_id": "BIBREF7" }, { "start": 930, "end": 953, "text": "(Dzendzik et al., 2020)", "ref_id": null }, { "start": 983, "end": 1001, "text": "(K M et al., 2018;", "ref_id": "BIBREF18" }, { "start": 1002, "end": 1020, "text": "Wang et al., 2019)", "ref_id": "BIBREF33" }, { "start": 1761, "end": 1784, "text": "(Bosselut et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 160, "end": 168, "text": "Figure 1", "ref_id": null }, { "start": 1230, "end": 1238, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "We use a GCN (Kipf and Welling, 2017) based model for infusing commonsense knowledge in the sarcasm detection task. Our experiments reveal that the commonsense augmented model performs at par with the baseline model. We perform an array of analysis experiments to identify where the commonsense infused model outperforms the baseline and where it fails.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction & Related Work", "sec_num": "1" }, { "text": "We use a graph convolution-based model to enable incorporation of COMET sequences given an input sentence. The sentence representations are retrieved from the pre-trained encoder of our baseline model. Our baseline model consists of a Transformer (Vaswani et al., 2017) based Distil-BERT (Sanh et al., 2019) is a light-weight encoder, which enables faster training, while achieving similar performance as other Transformer based encoders. The model is shown in Figure 2 . For every input instance, a graph is formed with edges between the input sentence and COMET sequences. No edges are present between individual COMET sequences. Sentence embeddings retrieved from the baseline DistilBERT form the initial graph embeddings. 
The intuition behind leveraging a graph-based architecture was to enable information flow between the representations of the input sentence and COMET sequences, thereby reducing the domain discrepancy between them.", "cite_spans": [ { "start": 247, "end": 269, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF31" }, { "start": 288, "end": 307, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [ { "start": 461, "end": 469, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "The graph is then fed into a GraphSage (Hamilton et al., 2017) network which produces the node embedding vector V \u2208 IR (M +1)\u00d7N , where M is the number of COMET sequences and N is the output dimension of the GCN. The node embedding vector V is then forwarded to a fully connected neural network layer to produce the final output. In section 4, we experiment with different edge configurations and observe how each edge configuration affects the downstream performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "We experimented with another model that incorporated COMET sequences with an attentionmechanism. In that model, the representation of the input sentence was concatenated with an aggregate representation of the COMET sequences, formed in an attentive fashion. Its performance was not better than the GCN-based model, so we do not describe it here. We experiment on the Reddit dataset of the shared task introduced by Ghosh et al. (2020). The statistics of the datasets are specified in Table 1 . All the aforementioned datasets are balanced. We report our results by randomly splitting into training and test set, and averaging the accuracy over 5 iterations. In our experiments, we incorporate a subset of COMET predicates (xWant and xEffect) related to the subject in a sentence.", "cite_spans": [], "ref_spans": [ { "start": 485, "end": 492, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "2" }, { "text": "We report the classification accuracy of the models for all datasets in Table 2 . The baseline denotes the DistilBERT performance. We see a high performance in the News headline dataset where the sentences are self-contained and language is not noisy. We see relatively lower performance of the baseline in FigLang 2020 Reddit, where we ignored the available context. Performance in SemEval dataset is low due to noisy tweets.", "cite_spans": [], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We conduct an ablation study with three configurations of the graph edges (a) bidirectional edges (b) edges from input \u2192 COMET sequences and (c) edges from COMET sequences \u2192 input. The results of the GCN-based model in different settings are shown in We examine whether the COMET representations leverage information from the input in the GCN setup by removing the input sentence representation before the FFNN module (shown in Figure 2 ) and experimenting with different edgeconfigurations. In Table 3 , we observe a significant performance dip with COMET\u2192input setup. 
This illustrates that the information flowing from input sentence to COMET sequences is more relevant.", "cite_spans": [], "ref_spans": [ { "start": 428, "end": 436, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 495, "end": 502, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We also measure the share of instances in the test set having the same predicted label from the baseline and the model. We observe a significant overlap (>90%) between the predictions of the baseline and the proposed model across all datasets in Table 4 , illustrating that the model isn't able to tackle new instances. Occluded Element \u2206 Input sentence 27.99% COMET sequences 1.38% Table 5 : Confidence change when different segments of the input are occluded. \u2206 denotes the change in confidence when different parts of the input is occluded. ", "cite_spans": [], "ref_spans": [ { "start": 246, "end": 253, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 383, "end": 390, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We perform saliency tests to investigate whether the model is reliant on commonsense sequences while taking decisions. (a) Gradient-based saliency (Bastings and Filippova, 2020 ) measure for a feature x i given an output class c is computed as \u2022) is the loss function. The saliency map is shown in Figure 3 . The saliency map vector has a dimension of 3 \u00d7 768, where the first row showcases the saliency values of the features corresponding to the input sentence while the remaining two rows correspond to the saliency of COMET sequences. For better visualization, all values are normalized between 0-1 and average pooling is performed on adjacent blocks of 8 to form a vector of dimension 3 \u00d7 192. From Figure 3 , it is evident that the model learns to identify important input features but assigns similar saliency values to all COMET features. ing a part of the input and observing the change in the output probability vector. We occlude the input representation and COMET representations respectively as shown in Figure 4 . The occlusion metric (Bastings and Filippova, 2020) Table 5 reports the results of this test. We observe that occluding the input sentence leads to a significant change in the output confidence while occluding the COMET sequences has little impact. 
These tests demonstrate that the model is more reliant on the input sentence and less on the COMET sequences for making the prediction.", "cite_spans": [ { "start": 147, "end": 176, "text": "(Bastings and Filippova, 2020", "ref_id": "BIBREF1" }, { "start": 1049, "end": 1079, "text": "(Bastings and Filippova, 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 244, "end": 246, "text": "\u2022)", "ref_id": null }, { "start": 298, "end": 306, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 704, "end": 712, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 1017, "end": 1025, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 1080, "end": 1087, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Saliency Test", "sec_num": "5" }, { "text": "\u2207 x i L(y, f (x)) \u2022 x i , where L(\u2022,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Saliency Test", "sec_num": "5" }, { "text": "is defined as E x\u223cD [|f c (x) \u2212 f c (x|x i = 0)|].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Saliency Test", "sec_num": "5" }, { "text": "In this section, we try to uncover why COMET sequences don't help in the sarcasm detection task. In order to identify instances where commonsense incorporation hurts the performance, we focus on samples where the model's prediction is wrong but the baseline is correct. Among these samples, we measure how many were non-sarcastic by defining a new measure non-sarcastic class coverage,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficacy of Commonsense", "sec_num": "6" }, { "text": "C N S GCN = |{x|x \u2208 S B GCN , l(x) = N S}| |S B GCN |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficacy of Commonsense", "sec_num": "6" }, { "text": "where S B GCN is the set of samples in which the model predicted incorrectly while the baseline was correct, l(\u2022) is the oracle function which returns the true label of an input instance x, and N S denotes the non-sarcastic class label. Results in Ta- ble 6 demonstrate a high value of C N S GCN across all datasets, this indicates that the large fraction of the instances where the model was incorrect but the baseline was correct were non-sarcastic. After surveying non-sarcastic instances we infer that commonsense knowledge fails to explain non-sarcastic samples and is present as irrelevant context hurting downstream performance (Petroni et al., 2020) .", "cite_spans": [ { "start": 635, "end": 657, "text": "(Petroni et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Efficacy of Commonsense", "sec_num": "6" }, { "text": "There are cases where the prediction failed either due to noisy input (prevalent in the Twitter based SemEval dataset) or subtle play of words which COMET sequences fail to explain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficacy of Commonsense", "sec_num": "6" }, { "text": "In order to investigate the utility of commonsense for specific type of sarcasm, we form a subset of the SemEval Irony dataset with samples only from irony with polarity contrast and non-sarcastic class by leveraging labels from the secondary Se-mEval task (mentioned in Section 3). C N S GCN for the new dataset is 57.1%, a significant reduction from the 64.6% in SemEval dataset in Table 6 . 
We infer that commonsense is only useful in detecting sarcasm with polarity contrast but struggles with other types of sarcasm.", "cite_spans": [], "ref_spans": [ { "start": 384, "end": 391, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Efficacy of Commonsense", "sec_num": "6" }, { "text": "In this section, we analyze a few examples shown in Table 7 and observe whether the COMET sequences are helpful in detecting sarcasm. We have anonymized any twitter handle with \"@usertag\" to prevent any leak of private information.", "cite_spans": [], "ref_spans": [ { "start": 52, "end": 59, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "7" }, { "text": "\u2022 In the first example of Table 7 , the input sentence is non-sarcastic. Retrieved commonsense sequences don't capture any information that may help in prediction.", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "7" }, { "text": "\u2022 In several instances, a sentence is sarcastic due to a subtle play of words or use of language. The commonsense based model struggles in such scenarios as COMET sequences cannot explain such events as shown in the second instance of Table 7 .", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 242, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "7" }, { "text": "\u2022 In the third example of Table 7 , we show that COMET sequences are able to perfectly capture the contrast between the intention and effect on the person.", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 33, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "7" }, { "text": "\u2022 In rare cases like the fourth instance of Table 7 , which is an example of irony with polarity contrast. It is still difficult for the commonsense model to explain the satire.", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 51, "text": "Table 7", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "7" }, { "text": "In this paper, we proposed the idea of integrating commonsense knowledge in the task of sarcasm detection. We observe that COMET infused model performs at par with the baseline. Through saliency tests, we observe that the model is less reliant on the commonsense representations in many cases. From our analysis, we infer that commonsense is most effective in identifying sarcasm with polarity contrast but fails to explain non-sarcastic samples or other types of sarcasm effectively, which hurts the overall performance. In the future, we will explore the utility of other forms of external knowledge such as factual world knowledge for sarcasm detection. We will also try to leverage commonsense to explain why a certain remark is sarcastic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "8" }, { "text": "Experimental SetupWe evaluate the models on three datasets (a) Irony detection SemEval task: Van Hee et al. (2018) conducted a SemEval task for irony detection considering an utterance in isolation. They also released a secondary task where the sarcastic samples were classified into three broad categories: verbal irony with polarity contrast, situational irony, and others. 
(b) News Headlines dataset(Misra and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Modelling context with user embeddings for sarcasm detection in social media", "authors": [ { "first": "Silvio", "middle": [], "last": "Amir", "suffix": "" }, { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Paula", "middle": [], "last": "Carvalho", "suffix": "" }, { "first": "M\u00e1rio", "middle": [ "J" ], "last": "Silva", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "167--177", "other_ids": { "DOI": [ "10.18653/v1/K16-1017" ] }, "num": null, "urls": [], "raw_text": "Silvio Amir, Byron C. Wallace, Hao Lyu, Paula Car- valho, and M\u00e1rio J. Silva. 2016. Modelling context with user embeddings for sarcasm detection in social media. In Proceedings of The 20th SIGNLL Con- ference on Computational Natural Language Learn- ing, pages 167-177, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?", "authors": [ { "first": "Jasmijn", "middle": [], "last": "Bastings", "suffix": "" }, { "first": "Katja", "middle": [], "last": "Filippova", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "149--155", "other_ids": { "DOI": [ "10.18653/v1/2020.blackboxnlp-1.14" ] }, "num": null, "urls": [], "raw_text": "Jasmijn Bastings and Katja Filippova. 2020. The ele- phant in the interpretability room: Why use atten- tion as explanation when we have saliency methods? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 149-155, Online. Association for Com- putational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "COMET: Commonsense transformers for automatic knowledge graph construction", "authors": [ { "first": "Antoine", "middle": [], "last": "Bosselut", "suffix": "" }, { "first": "Hannah", "middle": [], "last": "Rashkin", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Chaitanya", "middle": [], "last": "Malaviya", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4762--4779", "other_ids": { "DOI": [ "10.18653/v1/P19-1470" ] }, "num": null, "urls": [], "raw_text": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for au- tomatic knowledge graph construction. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762-4779, Florence, Italy. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Modeling protagonist emotions for emotion-aware storytelling", "authors": [ { "first": "Faeze", "middle": [], "last": "Brahman", "suffix": "" }, { "first": "Snigdha", "middle": [], "last": "Chaturvedi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "5277--5294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Faeze Brahman and Snigdha Chaturvedi. 2020. Mod- eling protagonist emotions for emotion-aware story- telling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5277-5294.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Clues for detecting irony in user-generated contents: oh", "authors": [ { "first": "Paula", "middle": [], "last": "Carvalho", "suffix": "" }, { "first": "Lu\u00eds", "middle": [], "last": "Sarmento", "suffix": "" }, { "first": "J", "middle": [], "last": "M\u00e1rio", "suffix": "" }, { "first": "Eug\u00e9nio De", "middle": [], "last": "Silva", "suffix": "" }, { "first": "", "middle": [], "last": "Oliveira", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paula Carvalho, Lu\u00eds Sarmento, M\u00e1rio J Silva, and Eug\u00e9nio De Oliveira. 2009. Clues for detecting irony in user-generated contents: oh...", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "!! it's\" so easy", "authors": [], "year": null, "venue": "Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion", "volume": "", "issue": "", "pages": "53--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "!! it's\" so easy\";-. In Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion, pages 53-56.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Smaranda Muresan, and Nanyun Peng. 2020. r 3 : Reverse, retrieve, and rank for sarcasm generation with commonsense knowledge", "authors": [ { "first": "Tuhin", "middle": [], "last": "Chakrabarty", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.13248" ] }, "num": null, "urls": [], "raw_text": "Tuhin Chakrabarty, Debanjan Ghosh, Smaranda Mure- san, and Nanyun Peng. 2020. r 3 : Reverse, retrieve, and rank for sarcasm generation with commonsense knowledge. 
arXiv preprint arXiv:2004.13248.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep short text classification with knowledge powered attention", "authors": [ { "first": "Jindong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yizhou", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jingping", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yanghua", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Haiyun", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu", "volume": "", "issue": "", "pages": "6252--6259", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33016252" ] }, "num": null, "urls": [], "raw_text": "Jindong Chen, Yizhou Hu, Jingping Liu, Yanghua Xiao, and Haiyun Jiang. 2019. Deep short text clas- sification with knowledge powered attention. In The Thirty-Third AAAI Conference on Artificial Intelli- gence, AAAI 2019, The Thirty-First Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Hon- olulu, Hawaii, USA, January 27 -February 1, 2019, pages 6252-6259. AAAI Press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Semi-supervised recognition of sarcasm in Twitter and Amazon", "authors": [ { "first": "Dmitry", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "Oren", "middle": [], "last": "Tsur", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "107--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcasm in Twitter and Amazon. In Proceedings of the Fourteenth Con- ference on Computational Natural Language Learn- ing, pages 107-116, Uppsala, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "2020. Q. can knowledge graphs be used to answer boolean questions? a. it's complicated!", "authors": [ { "first": "Daria", "middle": [], "last": "Dzendzik", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Vogel", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Foster", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daria Dzendzik, Carl Vogel, and Jennifer Foster. 2020. Q. can knowledge graphs be used to answer boolean questions? a. it's complicated! Association for Computational Linguistics (ACL).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Magnets for sarcasm: Making sarcasm detection timely, contextual and very personal", "authors": [ { "first": "Aniruddha", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "482--491", "other_ids": { "DOI": [ "10.18653/v1/D17-1050" ] }, "num": null, "urls": [], "raw_text": "Aniruddha Ghosh and Tony Veale. 2017. 
Magnets for sarcasm: Making sarcasm detection timely, con- textual and very personal. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 482-491, Copenhagen, Denmark. Association for Computational Linguis- tics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1003--1012", "other_ids": { "DOI": [ "10.18653/v1/D15-1116" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Weiwei Guo, and Smaranda Muresan. 2015. Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 1003- 1012, Lisbon, Portugal. Association for Computa- tional Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A report on the 2020 sarcasm detection shared task", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Avijit", "middle": [], "last": "Vajpayee", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Second Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "1--11", "other_ids": { "DOI": [ "10.18653/v1/2020.figlang-1.1" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Avijit Vajpayee, and Smaranda Mure- san. 2020. A report on the 2020 sarcasm detection shared task. In Proceedings of the Second Work- shop on Figurative Language Processing, pages 1- 11, Online. Association for Computational Linguis- tics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "On the psycholinguistics of sarcasm", "authors": [ { "first": "W", "middle": [], "last": "Raymond", "suffix": "" }, { "first": "", "middle": [], "last": "Gibbs", "suffix": "" } ], "year": 1986, "venue": "Journal of experimental psychology: general", "volume": "115", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond W Gibbs. 1986. On the psycholinguistics of sarcasm. Journal of experimental psychology: gen- eral, 115(1):3.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Irony in language and thought: A cognitive science reader", "authors": [ { "first": "Raymond", "middle": [ "W" ], "last": "Raymond W Gibbs", "suffix": "" }, { "first": "Herbert L", "middle": [], "last": "Gibbs", "suffix": "" }, { "first": "", "middle": [], "last": "Colston", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond W Gibbs Jr, Raymond W Gibbs, and Her- bert L Colston. 2007. Irony in language and thought: A cognitive science reader. 
Psychology Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Identifying sarcasm in Twitter: A closer look", "authors": [ { "first": "Roberto", "middle": [], "last": "Gonz\u00e1lez-Ib\u00e1\u00f1ez", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Wacholder", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "581--586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Gonz\u00e1lez-Ib\u00e1\u00f1ez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twit- ter: A closer look. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 581-586, Portland, Oregon, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Inductive representation learning on large graphs", "authors": [ { "first": "William", "middle": [ "L" ], "last": "Hamilton", "suffix": "" }, { "first": "Zhitao", "middle": [], "last": "Ying", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1024--1034", "other_ids": {}, "num": null, "urls": [], "raw_text": "William L. Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Process- ing Systems 30: Annual Conference on Neural In- formation Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 1024-1034.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Harnessing context incongruity for sarcasm detection", "authors": [ { "first": "Aditya", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Vinita", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "757--762", "other_ids": { "DOI": [ "10.3115/v1/P15-2124" ] }, "num": null, "urls": [], "raw_text": "Aditya Joshi, Vinita Sharma, and Pushpak Bhat- tacharyya. 2015. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), pages 757-762, Beijing, China. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning beyond datasets: Knowledge graph augmented neural networks for natural language processing", "authors": [ { "first": "K M", "middle": [], "last": "Annervaz", "suffix": "" }, { "first": "Somnath", "middle": [], "last": "Basu Roy Chowdhury", "suffix": "" }, { "first": "Ambedkar", "middle": [], "last": "Dukkipati", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "313--322", "other_ids": { "DOI": [ "10.18653/v1/N18-1029" ] }, "num": null, "urls": [], "raw_text": "Annervaz K M, Somnath Basu Roy Chowdhury, and Ambedkar Dukkipati. 2018. Learning beyond datasets: Knowledge graph augmented neural net- works for natural language processing. In Proceed- ings of the 2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 313-322, New Orleans, Louisiana. Association for Computational Linguis- tics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In 5th International Conference on Learn- ing Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "How to be sarcastic: The echoic reminder theory of verbal irony", "authors": [ { "first": "J", "middle": [], "last": "Roger", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Kreuz", "suffix": "" }, { "first": "", "middle": [], "last": "Glucksberg", "suffix": "" } ], "year": 1989, "venue": "Journal of experimental psychology: General", "volume": "118", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger J Kreuz and Sam Glucksberg. 1989. How to be sarcastic: The echoic reminder theory of verbal irony. Journal of experimental psychology: General, 118(4):374.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Really? well. apparently bootstrapping improves the performance of sarcasm and nastiness classifiers for online dialogue", "authors": [ { "first": "Stephanie", "middle": [], "last": "Lukin", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Workshop on Language Analysis in Social Media", "volume": "", "issue": "", "pages": "30--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephanie Lukin and Marilyn Walker. 2013. Really? well. apparently bootstrapping improves the perfor- mance of sarcasm and nastiness classifiers for online dialogue. In Proceedings of the Workshop on Lan- guage Analysis in Social Media, pages 30-40, At- lanta, Georgia. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis", "authors": [ { "first": "Diana", "middle": [], "last": "Maynard", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Greenwood", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "4238--4243", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana Maynard and Mark Greenwood. 2014. Who cares about sarcastic tweets? investigating the im- pact of sarcasm on sentiment analysis. In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation (LREC'14), pages 4238-4243, Reykjavik, Iceland. European Language Resources Association (ELRA).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Harnessing cognitive features for sarcasm detection", "authors": [ { "first": "Abhijit", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Diptesh", "middle": [], "last": "Kanojia", "suffix": "" }, { "first": "Seema", "middle": [], "last": "Nagar", "suffix": "" }, { "first": "Kuntal", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1095--1104", "other_ids": { "DOI": [ "10.18653/v1/P16-1104" ] }, "num": null, "urls": [], "raw_text": "Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2016. Harnessing cognitive features for sarcasm detection. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1095-1104, Berlin, Germany. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Sarcasm detection using hybrid neural network", "authors": [ { "first": "Rishabh", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Prahal", "middle": [], "last": "Arora", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.07414" ] }, "num": null, "urls": [], "raw_text": "Rishabh Misra and Prahal Arora. 2019. Sarcasm de- tection using hybrid neural network. arXiv preprint arXiv:1908.07414.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Identification of nonliteral language in social media: A case study on sarcasm", "authors": [ { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Gonzalez-Ibanez", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Nina", "middle": [], "last": "Wacholder", "suffix": "" } ], "year": 2016, "venue": "Journal of the Association for Information Science and Technology", "volume": "67", "issue": "11", "pages": "2725--2737", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smaranda Muresan, Roberto Gonzalez-Ibanez, Deban- jan Ghosh, and Nina Wacholder. 2016. Identifica- tion of nonliteral language in social media: A case study on sarcasm. 
Journal of the Association for Information Science and Technology, 67(11):2725- 2737.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "How context affects language models' factual predictions", "authors": [ { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Aleksandra", "middle": [], "last": "Piktus", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alexander", "middle": [ "H" ], "last": "Miller", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.04611" ] }, "num": null, "urls": [], "raw_text": "Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rockt\u00e4schel, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2020. How context affects lan- guage models' factual predictions. arXiv preprint arXiv:2005.04611.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Sarcasm as contrast between a positive sentiment and negative situation", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Surve", "suffix": "" }, { "first": "Lalindra De", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "704--714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 704-714, Seattle, Washing- ton, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.01108" ] }, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Verbal irony as implicit display of ironic environment: Distinguishing ironic utterances from nonirony", "authors": [ { "first": "Akira", "middle": [], "last": "Utsumi", "suffix": "" } ], "year": 2000, "venue": "Journal of Pragmatics", "volume": "32", "issue": "12", "pages": "1777--1806", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akira Utsumi. 2000. Verbal irony as implicit dis- play of ironic environment: Distinguishing ironic utterances from nonirony. 
Journal of Pragmatics, 32(12):1777-1806.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "SemEval-2018 task 3: Irony detection in English tweets", "authors": [ { "first": "Cynthia", "middle": [], "last": "Van Hee", "suffix": "" }, { "first": "Els", "middle": [], "last": "Lefever", "suffix": "" }, { "first": "V\u00e9ronique", "middle": [], "last": "Hoste", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "39--50", "other_ids": { "DOI": [ "10.18653/v1/S18-1005" ] }, "num": null, "urls": [], "raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2018. SemEval-2018 task 3: Irony detection in En- glish tweets. In Proceedings of The 12th Interna- tional Workshop on Semantic Evaluation, pages 39- 50, New Orleans, Louisiana. Association for Com- putational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Humans require context to infer ironic intent (so computers probably do, too)", "authors": [ { "first": "Byron", "middle": [ "C" ], "last": "Wallace", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Do Kook Choe", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Kertz", "suffix": "" }, { "first": "", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "512--516", "other_ids": { "DOI": [ "10.3115/v1/P14-2084" ] }, "num": null, "urls": [], "raw_text": "Byron C. Wallace, Do Kook Choe, Laura Kertz, and Eugene Charniak. 2014. Humans require context to infer ironic intent (so computers probably do, too). In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 512-516, Baltimore, Mary- land. 
Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Improving natural language inference using external knowledge in the science questions domain", "authors": [ { "first": "Xiaoyan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Pavan", "middle": [], "last": "Kapanipathi", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Musa", "suffix": "" }, { "first": "Mo", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Kartik", "middle": [], "last": "Talamadupula", "suffix": "" }, { "first": "Ibrahim", "middle": [], "last": "Abdelaziz", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Achille", "middle": [], "last": "Fokoue", "suffix": "" }, { "first": "Bassem", "middle": [], "last": "Makni", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Mattei", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "7208--7215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyan Wang, Pavan Kapanipathi, Ryan Musa, Mo Yu, Kartik Talamadupula, Ibrahim Abdelaziz, Maria Chang, Achille Fokoue, Bassem Makni, Nicholas Mattei, et al. 2019. Improving natural lan- guage inference using external knowledge in the sci- ence questions domain. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7208-7215.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Story completion with explicit modeling of commonsense knowledge", "authors": [ { "first": "Mingda", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Keren", "middle": [], "last": "Ye", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Kovashka", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops", "volume": "", "issue": "", "pages": "376--377", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingda Zhang, Keren Ye, Rebecca Hwa, and Adri- ana Kovashka. 2020. Story completion with explicit modeling of commonsense knowledge. In Proceed- ings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition Workshops, pages 376- 377.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Proposed model architecture. Representations of the input sentence along with two COMET sequences are retrieved from pre-trained DistilBERT that are used to initialize a GCN. Post training, node representations of the graph is passed through a fullyconnected neural network to generate the output.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Visualization of gradient-based saliency tests. Darker shade denotes lower absolute values. The first row shows the features corresponding to the input sentence, and the other two rows are features from COMET sequences xWant and xEffect. We observe that features from input sentence (first row) receive high saliency values.", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Occlusion setup. First setup shows that the input sentence representation (first row) is occluded. 
Second setup commonsense sequence representations are occluded (second and third rows).", "type_str": "figure", "num": null, "uris": null }, "FIGREF4": { "text": "(b) Occlusion-based saliency test involves occlud-", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "content": "", "html": null, "type_str": "table", "num": null, "text": "Number of training samples in train/test split for each dataset." }, "TABREF3": { "content": "
Edge configuration | Performance
GCN (bidirectional) | 67.27%
GCN (COMET \u2192 input) | 55.00%
GCN (input \u2192 COMET) | 67.36%
", "html": null, "type_str": "table", "num": null, "text": "Accuracy of the baseline DistilBERT and GCN model (in various edge configurations). We do not observe any significant change in sarcasm detection performance with the incorporation of commonsense sequences." }, "TABREF4": { "content": "", "html": null, "type_str": "table", "num": null, "text": "Performance of the proposed model for different edge configurations. We observe a sharp performance drop in (COMET \u2192 input) configuration." }, "TABREF5": { "content": "
Dataset | Overlap
News Headline | 99.5%
SemEval Irony | 91.6%
FigLang 2020 (Reddit) | 92.7%
", "html": null, "type_str": "table", "num": null, "text": "The performance of the GCN model is at par with the baseline and varying edge" }, "TABREF6": { "content": "", "html": null, "type_str": "table", "num": null, "text": "Test set overlap where the output label from the GCN and DistilBERT model is the same." }, "TABREF7": { "content": "
", "html": null, "type_str": "table", "num": null, "text": "C N S GCN statistic across different datasets." }, "TABREF9": { "content": "
", "html": null, "type_str": "table", "num": null, "text": "Example input instances along with their ground truth label and corresponding commonsense sentences retrieved from COMET. We analyze the utility of COMET sequences described as explanations." } } } }