{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:04.641458Z" }, "title": "An Investigation into the Contribution of Locally Aggregated Descriptors to Figurative Language Identification", "authors": [ { "first": "Sina", "middle": [ "Mahdipour" ], "last": "Saravani", "suffix": "", "affiliation": { "laboratory": "", "institution": "Colorado State University", "location": {} }, "email": "sinamps@colostate.edu" }, { "first": "Ritwik", "middle": [], "last": "Banerjee", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stony Brook University", "location": {} }, "email": "rbanerjee@cs.stonybrook.edu" }, { "first": "Indrakshi", "middle": [], "last": "Ray", "suffix": "", "affiliation": { "laboratory": "", "institution": "Colorado State University", "location": {} }, "email": "indrakshi.ray@colostate.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In natural language understanding, topics that touch upon figurative language and pragmatics are notably difficult. We probe a novel use of locally aggregated descriptors-specifically, an architecture called NeXtVLAD-motivated by its accomplishments in computer vision, achieve tremendous success in the FigLang2020 sarcasm detection task. The reported F 1 score of 93.1% is 14% higher than the next best result. We specifically investigate the extent to which the novel architecture is responsible for this boost, and find that it does not provide statistically significant benefits. Deep learning approaches are expensive, and we hope our insights highlighting the lack of benefits from introducing a resourceintensive component will aid future research to distill the effective elements from long and complex pipelines, thereby providing a boost to the wider research community.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In natural language understanding, topics that touch upon figurative language and pragmatics are notably difficult. We probe a novel use of locally aggregated descriptors-specifically, an architecture called NeXtVLAD-motivated by its accomplishments in computer vision, achieve tremendous success in the FigLang2020 sarcasm detection task. The reported F 1 score of 93.1% is 14% higher than the next best result. We specifically investigate the extent to which the novel architecture is responsible for this boost, and find that it does not provide statistically significant benefits. Deep learning approaches are expensive, and we hope our insights highlighting the lack of benefits from introducing a resourceintensive component will aid future research to distill the effective elements from long and complex pipelines, thereby providing a boost to the wider research community.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Natural language understanding often goes beyond the syntactic and semantic layers, and perhaps nowhere is this more palpable than in the use of figurative language. A better understanding of figurative language use, such as metaphors, irony, or sarcasm, can not only lead to advances in computational creativity (Veale, 2011; Kuznetsova et al., 2013) , but also in understanding social media content, where users often employ such pragmatic tools as irony or sarcasm (Reyes et al., 2013; Riloff et al., 2013) . 
This type of figurative language is difficult to identify, however, at least partly due to what the influential literary poet and critic William Empson called \"ambiguities\" (Empson, 1947) in the language. In particular, figurative language use with sarcasm or irony completely decouples -and even contrasts -the communicator's intent from the communicated content (Camp, 2012) , rendering shallow syntactic or semantic features unsuitable. The poor fit of such features is further exacerbated in social media posts due to the ubiquity of grammatical errors, hashtags, emojis, etc.", "cite_spans": [ { "start": 313, "end": 326, "text": "(Veale, 2011;", "ref_id": "BIBREF20" }, { "start": 327, "end": 351, "text": "Kuznetsova et al., 2013)", "ref_id": "BIBREF11" }, { "start": 468, "end": 488, "text": "(Reyes et al., 2013;", "ref_id": "BIBREF17" }, { "start": 489, "end": 509, "text": "Riloff et al., 2013)", "ref_id": "BIBREF18" }, { "start": 685, "end": 699, "text": "(Empson, 1947)", "ref_id": "BIBREF4" }, { "start": 876, "end": 888, "text": "(Camp, 2012)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The deeper, context-dependent inferential nature of figurative language, together with the poor fit of shallow syntactic and semantic features, makes deep neural networks a natural candidate for downstream NLP tasks like sarcasm detection (Ghosh and Veale, 2016) . Unfortunately, with increasing popularity of deep learning, the reliability of findings in publications that extensively employ deep learning can be expected, in general, to decrease (Pfeiffer and Hoffmann, 2009) . In light of this seminal empirical observation and the general difficulty of accurately identifying figurative language, it is reasonable to not expect outright success on a benchmark corpus simply based on the use of a deep network.", "cite_spans": [ { "start": 239, "end": 262, "text": "(Ghosh and Veale, 2016)", "ref_id": "BIBREF7" }, { "start": 448, "end": 477, "text": "(Pfeiffer and Hoffmann, 2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The concerns about reliability, and thus, about reproducibility, are particularly acute in deep learning. For instance, Reimers and Gurevych (2017) demonstrated that the hyperparameter settings have a significant impact on the final results obtained by a model. Crane (2018) further showed that other confounding factors such as variation of GPUs, the exact version of a framework, the randomness of a seed value provided to a learning algorithm, and the interaction between multiple such factors, can all impact the obtained results.", "cite_spans": [ { "start": 120, "end": 147, "text": "Reimers and Gurevych (2017)", "ref_id": "BIBREF16" }, { "start": 262, "end": 274, "text": "Crane (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Beyond reproducibility, however, lies another pertinent factor: the use of increasingly complex pipelines where multiple sophisticated components are glued together for an important downstream NLP task. In such scenarios, it is not always clear which components within the complex system may be responsible for improved outcomes. A simple change in data preprocessing may lead to a significant difference in the final result, for example (Etaiwi and Naymat, 2017; Camacho-Collados and Pilehvar, 2018) . 
In publications that introduce complex NLP pipelines, however, such details have sometimes been omitted.", "cite_spans": [ { "start": 438, "end": 463, "text": "(Etaiwi and Naymat, 2017;", "ref_id": "BIBREF5" }, { "start": 464, "end": 500, "text": "Camacho-Collados and Pilehvar, 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Context-1 The [govt] just confiscated a $180 million boat shipment of cocaine from drug traffickers. Sarcastic Context-2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turn Tweet Label", "sec_num": null }, { "text": "People think 5 tonnes is not a load of cocaine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Turn Tweet Label", "sec_num": null }, { "text": "Man! I've seen more than that on a Friday night. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Response", "sec_num": null }, { "text": "Within the limited scope of this paper, our goal is to specifically investigate the state-of-the-art sarcasm detection system presented by Lee et al. (2020) which reported an F 1 score of 93.1%, 14% higher than the next best result reported to the FigLang 2020 workshop (Ghosh et al., 2020) for the Twitter track -and to distill a novel deep learning component used in their pipeline in order to investigate its contribution to the final result. Through a comprehensive series of experiments, we find that this novel architecture (discussed in Sections 3 and 4) does not lead to any significant improvement. The improvement may thus be attributed to components other than deep learning, such as augmenting the corpus by using additional data. Investigating the other components, however, is not in the scope of the work being presented here.", "cite_spans": [ { "start": 139, "end": 156, "text": "Lee et al. (2020)", "ref_id": "BIBREF12" }, { "start": 270, "end": 290, "text": "(Ghosh et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Exemplar task and data", "sec_num": "2" }, { "text": "The task is to determine if the final response in a thread (i.e., a sequence of Tweets where each post is in response to its previous post) is sarcastic. One such thread is shown in Table 1 . All our experiments are conducted on the Twitter corpus of the FigLang 2020 sarcasm detection task (Ghosh et al., 2020) , which comprises 5, 000 threads in the training set and 1, 800 in the test set. Additional properties of this corpus are shown in Table 2 .", "cite_spans": [ { "start": 291, "end": 311, "text": "(Ghosh et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 182, "end": 189, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 443, "end": 450, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Exemplar task and data", "sec_num": "2" }, { "text": "The architecture we investigate has recently been used in downstream NLP tasks, motivated by its success in computer vision. Its origins, however, can be traced back to NLP research, when Sivic and Zisserman (2003) borrowed from the bag-ofwords approach used in text retrieval. Since then, a significant body of work in computer vision has developed this approach further. The core idea being the treatment of an image as a document, and lowdimensional features 1 extracted from them forming the visual vocabulary, thus enabling a vector representation of each image, subsequently used in classification or ranking tasks. 
A key advancement came in the form of Vector of Locally Aggregated Descriptors (VLAD), introduced by J\u00e9gou et al. (2010) . In this work, too, low-dimensional features were extracted from images, but K clusters of the features were created, and only the difference of each feature from the cluster center was recorded. Instead of a single N -dimensional feature vector, each image would thus be represented by a K \u00d7 N matrix.", "cite_spans": [ { "start": 188, "end": 214, "text": "Sivic and Zisserman (2003)", "ref_id": "BIBREF19" }, { "start": 723, "end": 742, "text": "J\u00e9gou et al. (2010)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "The non-differentiable hard cluster assignment, however, renders it unsuitable for training a neural network. NetVLAD (Arandjelovic et al., 2016) resolves this by using the softmax function, whose parameters can be learned during training. Since the cluster assignments of a feature are not known prior to training, their approach requires K Ndimensional difference vectors to encode each feature. This increase in the number of parameters impedes model optimization, and may lead to overfitting -drawbacks discussed and subsequently addressed by NeXtVLAD (Lin et al., 2018) by introducing a step prior to the soft cluster assignments. In this step, the input is expanded to \u03bbN size by a fully-connected layer, and then decomposed into G groups of lower-dimensional vectors. Further, a sigmoid function with range [0, 1] is used to assign attention scores to the groups for each vector. The process effectively provides a G \u03bb reduction in the number of parameters, by aggregating lowerdimensional vectors. From a linear algebra perspective, this can be interpreted as representing the data using subspace projections of the original vector. ", "cite_spans": [ { "start": 118, "end": 145, "text": "(Arandjelovic et al., 2016)", "ref_id": "BIBREF0" }, { "start": 556, "end": 574, "text": "(Lin et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "G x K x 1 .. . Context 1 E M E 3 E 2 E 1 . . . T M T 3 T 2 T 1 . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "3" }, { "text": "Fully Connected Figure 1 : The architecture for sarcasm detection, where M is the number of tokens from the input text, N is the dimension of the BERT representation, and G is the number of groups into which the input is split after expansion.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 24, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Thread Representation", "sec_num": null }, { "text": "BERT BiLSTM NeXtVLAD Classifier Input M x N M x N G x K x \u03bbN/G K x \u03bbN/G G x 1 x \u03bbN/G G x K x \u03bbN/G K x \u03bbN/G 1 x K\u03bbN/G 1 x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Thread Representation", "sec_num": null }, { "text": "For an analogous use of NeXtVLAD in NLP, the token representation vectors take the place of the feature vectors used in computer vision literature. In particular, for sarcasm detection using the FigLang corpus, one entire thread needs to be represented by a K \u00d7N matrix. To achieve this, the context and response Tweets (as shown in Table 1 ) from a single thread are concatenated, with a special [SEP] token separating them. This token is known to BERT, and used in its next sentence prediction task. 
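One way to realize this concatenation is sketched below with the HuggingFace transformers library; the checkpoint name and the helper function are illustrative assumptions rather than the published code.

from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
bert = BertModel.from_pretrained('bert-large-cased')

def encode_thread(context_tweets, response_tweet):
    # join the context turns and the response with [SEP] between consecutive posts
    text = ' [SEP] '.join(context_tweets + [response_tweet])
    inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
    return bert(**inputs).last_hidden_state  # token representations of shape (1, M, N)
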
Here, the token is used to separate different posts within a thread. After concatenation, the pretrained BERT model is used to obtain a vector representation of each token. Then, it is passed through a BiLSTM layer before being fed to the NeXtVLAD component. At this point, NeXtVLAD, as a parametric intelligent pooling and aggregation layer, represents the whole Tweet thread as a K \u00d7 N matrix, which is finally flattened and fed to two dense layers with a softmax function to assign the predicted label. This architecture, based on the explanation provided by Lee et al. (2020) , is presented in Figure 1 . Consider M input tokens, each represented by a vector of size N produced by the language model and further tuned by the BiLSTM layer (e.g., N = 1024 for BERT Large). We denote these tokens by $x_t$, $t \in \{1, \ldots, M\}$. Each $x_t$ is expanded to $\dot{x}_t$ with shape $(1, \lambda N)$ and reshaped to $\tilde{x}_t$ with shape $(G, 1, \lambda N / G)$. Then, the (1) soft assignment of $\tilde{x}^{g}_t$ to the cluster $k$, and (2) the attention over groups, are computed as", "cite_spans": [ { "start": 1063, "end": 1080, "text": "Lee et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 333, "end": 340, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1099, "end": 1107, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\alpha_{gk}(\dot{x}_t) = \frac{e^{w_{gk}^{T} \dot{x}_t + b_{gk}}}{\sum_{s=1}^{K} e^{w_{gs}^{T} \dot{x}_t + b_{gs}}}", "eq_num": "(1)" } ], "section": "Architecture for sarcasm detection", "sec_num": "4" }, { "text": "and $\alpha_{g}(\dot{x}_t) = \sigma(w_{g}^{T} \dot{x}_t + b_{g})$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" }, { "text": "The locally aggregated feature vectors (i.e., the VLAD vectors) are generated by computing the product of the attention, assignment, and the difference from the cluster center", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" }, { "text": "$v^{g}_{tki} = \alpha_{g}(\dot{x}_t) \alpha_{gk}(\dot{x}_t) (\tilde{x}^{g}_{ti} - c_{ki})$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" }, { "text": "Finally, the entire thread is represented by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" }, { "text": "$r_{ki} = \sum_{t,g} v^{g}_{tki}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" }, { "text": "In the above equations, t, g, k, and i iterate over tokens, groups, clusters, and vector elements respectively, while w and b denote the weight and bias parameters of the linear transformations in the fully-connected layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" },
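{ "text": "To make the computation above concrete, the following is a minimal PyTorch sketch of the grouped soft-assignment, group attention, and aggregation steps; the class, argument names, and shapes are illustrative assumptions rather than the original system's implementation.

import torch
import torch.nn as nn

class NeXtVLADSketch(nn.Module):
    def __init__(self, dim_n, clusters_k, groups_g, expansion=4):
        super().__init__()
        self.K, self.G = clusters_k, groups_g
        self.D = expansion * dim_n // groups_g                  # per-group size lambda*N/G
        self.expand = nn.Linear(dim_n, expansion * dim_n)       # x_t -> x_dot_t
        self.assign = nn.Linear(expansion * dim_n, groups_g * clusters_k)  # w_gk, b_gk
        self.gate = nn.Linear(expansion * dim_n, groups_g)      # w_g, b_g
        self.centers = nn.Parameter(torch.randn(clusters_k, self.D))       # cluster centers c_k

    def forward(self, tokens):                       # tokens: (batch, M, N) from the BiLSTM
        x_dot = self.expand(tokens)                  # (batch, M, lambda*N)
        # Eq. (1): soft assignment over the K clusters, per token and per group
        a_gk = self.assign(x_dot).view(*x_dot.shape[:2], self.G, self.K).softmax(-1)
        # attention over groups: a sigmoid gate per token and group
        a_g = torch.sigmoid(self.gate(x_dot)).unsqueeze(-1)     # (batch, M, G, 1)
        x_tilde = x_dot.view(*x_dot.shape[:2], self.G, self.D)  # grouped features
        resid = x_tilde.unsqueeze(3) - self.centers             # (batch, M, G, K, D)
        # weight the residuals and sum over tokens t and groups g -> (batch, K, D)
        vlad = (a_g.unsqueeze(-1) * a_gk.unsqueeze(-1) * resid).sum(dim=(1, 2))
        return vlad.flatten(1)                       # flattened thread representation
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture for sarcasm detection", "sec_num": "4" },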
{ "text": "We delve into several modifications of the model, as well as various hyperparameter settings, in order to investigate how much effect the NeXtVLAD component has on the sarcasm detection task. Our experiments initially use the same training configuration as Lee et al. (2020) , before exploring further. Since Lee et al. (2020) employ additional unpublished data, an exact reproduction of the experiments is not possible. Moreover, the partition of the corpus into training and validation sets is left unspecified. Thus, their results reported on the validation set are not truly comparable. Some hyperparameter settings, like the number of epochs for training, are also omitted from their report. However, the primary aim of this work is not to focus on reproduction of the results, but to determine what role the NeXtVLAD component played in the excellent final F 1 score of 93.1%.", "cite_spans": [ { "start": 257, "end": 274, "text": "Lee et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "Test set results: The performance of the different configurations is shown in Table 3 . Our results are shown for the original FigLang test set as well as the one-fifth validation set we separated from training 2 . All the models reported there have been trained for 8 epochs with a batch size of 4, although we also trained them for different numbers of epochs ranging from 3 to 30. Lee et al. (2020) mention the use of early stopping to choose their number of training epochs, which aims to prevent overfitting by monitoring the model performance on a held-out set at the end of each epoch, and stopping the training when performance starts to degrade. Their work, however, leaves out two hyperparameter values required for replication: patience, which controls the number of consecutive times it is acceptable for a model to not improve, and delta, the minimum threshold for differential improvement.", "cite_spans": [ { "start": 354, "end": 371, "text": "Lee et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 74, "end": 81, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Validation set results", "sec_num": null }, { "text": "Without these, we follow Fomin et al. (2020) and apply early stopping with patience and delta set to 2 and 0, respectively. With early stopping, the number of optimal epochs varied, but even while setting the random states manually to make the configuration as deterministic as possible, repeated experiments showed the optimal training duration to always vary between 5 and 12 epochs (a subset of the more comprehensive experiments we conducted, checking from 3 to 30 epochs). In our experiments, the BERT large-cased + BiLSTM + NeXtVLAD model is identical to that of Lee et al. (2020) (without their data augmentation and modification). The hyperparameters for this model are provided in Table 4 . Since this model achieves the best F 1 score on the validation set with 8 training epochs, we fix the number of training epochs to be 8 for the other models as well.", "cite_spans": [], "ref_spans": [ { "start": 669, "end": 676, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "In order to replicate the ensemble model discussed by Lee et al. (2020) , threads with more than one context are used to create extra samples by removing the furthest context, one at a time, until only one context remains. In the experiments using this data expansion (DE), the thread in Table 1 , for instance, gives rise to one additional sample, with only context 2 and the response. Then, a separate model is trained for each context length, and majority voting assigns the final label. 
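As an illustration, the expansion and voting steps could be sketched as follows (a hypothetical outline; the thread format and the per-length models with a predict method are assumptions, not the published code):

from collections import Counter

def expand_thread(thread):
    # thread = {'context': [c1, ..., cn], 'response': r, 'label': y}; c1 is the furthest turn
    samples = []
    context = list(thread['context'])
    while context:
        samples.append({'context': list(context),
                        'response': thread['response'],
                        'label': thread['label']})
        context.pop(0)              # remove the furthest context, one at a time
    return samples                  # one sample per remaining context length

def majority_vote(models_by_length, thread):
    # each per-length model votes on the thread truncated to its own context length
    votes = [model.predict({'context': thread['context'][-length:],
                            'response': thread['response']})
             for length, model in models_by_length.items()]
    return Counter(votes).most_common(1)[0][0]
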
We also conduct a series of experiments where the response Tweet is removed from each thread, and the remaining thread is considered non-sarcastic. These are indicated in Table 3 by LA (label augmentation).", "cite_spans": [ { "start": 54, "end": 71, "text": "Lee et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 288, "end": 296, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 663, "end": 670, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "To explore further, we record the performance for all training epochs on the validation set. Table 5 shows the accuracy for epochs 2 to 8, for the model proposed by Lee et al. (2020) (the first configuration in Table 3 ). We compute the accuracy and F 1 score for up to 30 training epochs. Comparing the best scores of the models that employ NeXtVLAD with those of the models that do not, we find no statistically significant improvement.", "cite_spans": [ { "start": 165, "end": 182, "text": "Lee et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 93, "end": 100, "text": "Table 5", "ref_id": null }, { "start": 211, "end": 218, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We also include additional experiments that replace the BiLSTM with convolution layers. We use KimCNN (Kim, 2014) as well as a custom CNN (simply called OurCNN) with filters that always cover one response token with various numbers of context tokens. Appendix A provides a discussion of our custom CNN. These variations, too, however, do not outperform the baseline results obtained through BERT alone.", "cite_spans": [ { "start": 98, "end": 109, "text": "(Kim, 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "In image/video processing, a large number of low-dimensional descriptors extracted from the original high-dimensional image (such as SIFT vectors of size 128) are fed to NeXtVLAD. In NLP applications, however, the token vectors have a much higher dimension. It is possible that this is why the subspace representation does not provide any advantage over the original vector representation. Another possibility is that, unlike images or videos, sub-vector representations of tokens do not form meaningful units in natural language tasks, and thus, the low-dimensional split actually hurts the learner. Our experiments also show that domain-specific models like CTBERT (M\u00fcller et al., 2020) offer comparable performance, but reach their best results in fewer epochs of training. We feel that it is important to distinguish the components of a complex NLP pipeline that contribute to improvements in downstream tasks from the other components in the pipeline. While stopping short of providing explainability to a deep learning system, this type of investigation can, at the very least, provide attribution to specific components of NLP pipelines. In other words, it can help us identify which parts of a pipeline are primarily responsible for improvements in a downstream task.", "cite_spans": [ { "start": 676, "end": 697, "text": "(M\u00fcller et al., 2020)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.1" }, { "text": "Such attributions can help us build comparable systems that are significantly less resource-intensive. 
In our experiments, we were able to train models based on the BERT Large architecture with a 2-layer fully-connected classification head Table 5 : The validation set accuracy for training epochs 2 to 8 of the first model configuration from Table 3 (the first and second rows from Table 3) .", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 246, "text": "Table 5", "ref_id": null }, { "start": 342, "end": 349, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 382, "end": 390, "text": "Table 3)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5.1" }, { "text": "with a batch size of 2 and sequence length of 512 on a single 12 GB GPU (NVidia GeForce GTX Titan X). But, with the addition of BiLSTM and NeXtVLAD, the same configuration was only able to fit a batch size of 1. For all the model configurations discussed in this paper, BERT Large-Cased + BiLSTM + NeXtVLAD required two 24 GB GPUs (Nvidia RTX 3090) to fit a batch size of 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.1" }, { "text": "We investigate the extent to which NeXtVLAD contributes to improved results in a recent sarcasm detection task, and find that it offers little in terms of additional benefits. Our conjecture at this point is, thus, that the 14% improvement achieved by Lee et al. (2020) must entirely be due to the natural language augmentation techniques used. Our work also indicates that local aggregators like NeXtVLAD are unlikely to offer significant benefits to tasks related to figurative language identification, but more empirical work is needed to confirm this hypothesis. We hope that our insights can help future research in this direction by making it easier to channel their efforts into aspects of a pipeline that have tangible and attributable benefits to the final downstream NLP task.", "cite_spans": [ { "start": 252, "end": 269, "text": "Lee et al. (2020)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The literature on image processing often uses the term \"descriptor\", but to stay in tune with the terminology in NLP research, we continue to use the term \"feature\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our code and the choice of validation set are available at github.com/sinamps/nextvlad-for-nlp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by funds from U.S. National Science Foundation (NSF) under award number CNS 2027750, CNS 1822118, and SES 1834597, and from NIST, Statnett, Cyber Risk Research, AMI, and ARL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "To reduce the differences in the shape (i.e., dimensions) and quantity of features fed to NeXtVLAD in Computer Vision and NLP, we designed a custom Convolutional Neural Network (CNN) to transform features into probably a more suitable space. In this section, we present the details of this custom CNN for extracting features for the NeXtVLAD layer. Figure 2 depicts the architecture of our CNN. First, we concatenate all the context Tweets and pass them to BERT to get the token representations and store them in a M \u00d7 N matrix. The response Tweet also goes through the same process and is represented in a M \u00d7 N matrix. 
N is the dimension of the token representation vectors and M and M denote the number of tokens in the contexts and response respectively. Each row in these matrices contains the vector representation of one token. Similar to KimCNN (Kim, 2014) , we set the width of the kernel to the dimension of the token representation vector (N). But, distinct from KimCNN, our kernels are always applied to local areas from two distinct input matrices.In our architecture, kernels only slide vertically to move over different tokens. To demonstrate, consider the kernel of size 3 in Figure 2 . The first two rows of this kernel cover the first two tokens of the context matrix and the last row covers the first token in the response matrix. The inner product is computed and yields the first element in the first output vector. Then, the blue portion of the kernel slides downward and the computation repeats to yield the second element of the first output vector. When this sliding window reaches the end of the context matrix, the first output vector is complete. Now, the gray portion of the kernel slides down-ward on the response matrix and all previous steps repeat to generate the next output vector. This set of operations with F different kernels and by applying appropriate zero padding to the input, yields an output of shape (F, M , M ) which is (64, 100, 512) in our implementation. This output is rearranged and reshaped to shape (M \u00d7 M, F ), which is much more similar to image/video features in shape and quantity. This is fed to NeXtVLAD in our sarcasm detection architecture. We use 64 kernels in our experiments with size 2, 3, 4, and 5 (16 kernels of each size; size only refers to the height of the kernel, since the width is fixed). In our implementation, the values are set as F = 64, M = 512, and M = 100.", "cite_spans": [ { "start": 853, "end": 864, "text": "(Kim, 2014)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 349, "end": 357, "text": "Figure 2", "ref_id": null }, { "start": 1192, "end": 1200, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "A Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "NetVLAD: CNN architecture for weakly supervised place recognition", "authors": [ { "first": "Relja", "middle": [], "last": "Arandjelovic", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Gronat", "suffix": "" }, { "first": "Akihiko", "middle": [], "last": "Torii", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Pajdla", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Sivic", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "5297--5307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. 2016. NetVLAD: CNN ar- chitecture for weakly supervised place recognition. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 5297- 5307. 
Institute of Electrical and Electronics Engi- neers.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis", "authors": [ { "first": "Jose", "middle": [], "last": "Camacho", "suffix": "" }, { "first": "-Collados", "middle": [], "last": "", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "40--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jose Camacho-Collados and Mohammad Taher Pile- hvar. 2018. On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis. In Proceedings of the Workshop BlackboxNLP: Ana- lyzing and Interpreting Neural Networks for NLP, pages 40-46. Association for Computational Lin- guistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Sarcasm, Pretense, and The Semantics/Pragmatics Distinction", "authors": [ { "first": "Elisabeth", "middle": [], "last": "Camp", "suffix": "" } ], "year": 2012, "venue": "No\u00fbs", "volume": "46", "issue": "4", "pages": "587--634", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elisabeth Camp. 2012. Sarcasm, Pretense, and The Se- mantics/Pragmatics Distinction. No\u00fbs, 46(4):587- 634.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Questionable Answers in Question Answering Research: Reproducibility and Variability of Published Results", "authors": [ { "first": "Matt", "middle": [], "last": "Crane", "suffix": "" } ], "year": 2018, "venue": "Transactions of the Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "241--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Crane. 2018. Questionable Answers in Question Answering Research: Reproducibility and Variabil- ity of Published Results. Transactions of the Associ- ation for Computational Linguistics, 6:241-252.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Seven Types of Ambiguity", "authors": [ { "first": "William", "middle": [], "last": "Empson", "suffix": "" } ], "year": 1947, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Empson. 1947. Seven Types of Ambiguity, 2nd edition. Chatto and Windus, London.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Impact of applying Different Preprocessing Steps on Review Spam Detection", "authors": [ { "first": "Wael", "middle": [], "last": "Etaiwi", "suffix": "" }, { "first": "Ghazi", "middle": [], "last": "Naymat", "suffix": "" } ], "year": 2017, "venue": "Procedia Computer Science", "volume": "113", "issue": "", "pages": "273--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wael Etaiwi and Ghazi Naymat. 2017. The Impact of applying Different Preprocessing Steps on Re- view Spam Detection. 
Procedia Computer Science, 113:273-279.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "High-level library to help with training neural networks in PyTorch", "authors": [ { "first": "V", "middle": [], "last": "Fomin", "suffix": "" }, { "first": "J", "middle": [], "last": "Anmol", "suffix": "" }, { "first": "S", "middle": [], "last": "Desroziers", "suffix": "" }, { "first": "J", "middle": [], "last": "Kriss", "suffix": "" }, { "first": "A", "middle": [], "last": "Tejani", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Fomin, J. Anmol, S. Desroziers, J. Kriss, and A. Te- jani. 2020. High-level library to help with training neural networks in PyTorch. https://github. com/pytorch/ignite.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Fracking Sarcasm using Neural Network", "authors": [ { "first": "Aniruddha", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "161--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking Sarcasm using Neural Network. In Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 161-169. Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Report on the 2020 Sarcasm Detection Shared Task", "authors": [ { "first": "Debanjan", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "Avijit", "middle": [], "last": "Vajpayee", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "1--11", "other_ids": { "DOI": [ "10.18653/v1/2020.figlang-1.1" ] }, "num": null, "urls": [], "raw_text": "Debanjan Ghosh, Avijit Vajpayee, and Smaranda Mure- san. 2020. A Report on the 2020 Sarcasm Detection Shared Task. In Proceedings of the Workshop on Figurative Language Processing, pages 1-11. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Aggregating Local Descriptors into a Compact Image Representation", "authors": [ { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" }, { "first": "Matthijs", "middle": [], "last": "Douze", "suffix": "" }, { "first": "Cordelia", "middle": [], "last": "Schmid", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "P\u00e9rez", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "3304--3311", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herv\u00e9 J\u00e9gou, Matthijs Douze, Cordelia Schmid, and Patrick P\u00e9rez. 2010. Aggregating Local Descrip- tors into a Compact Image Representation. In Pro- ceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3304-3311. 
Institute of Electrical and Electronics Engineers.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Convolutional Neural Networks for Sentence Classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 1746-1751. Association for Com- putational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Understanding and Quantifying Creativity in Lexical Composition", "authors": [ { "first": "Polina", "middle": [], "last": "Kuznetsova", "suffix": "" }, { "first": "Jianfu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1246--1258", "other_ids": {}, "num": null, "urls": [], "raw_text": "Polina Kuznetsova, Jianfu Chen, and Yejin Choi. 2013. Understanding and Quantifying Creativity in Lexi- cal Composition. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing, pages 1246-1258. Association for Compu- tational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Augmenting Data for Sarcasm Detection with Unlabeled Conversation Context", "authors": [ { "first": "Hankyol", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Youngjae", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Gunhee", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Workshop on Figurative Language Processing", "volume": "", "issue": "", "pages": "12--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hankyol Lee, Youngjae Yu, and Gunhee Kim. 2020. Augmenting Data for Sarcasm Detection with Un- labeled Conversation Context. In Proceedings of the Workshop on Figurative Language Processing, pages 12-17. Association for Computational Lin- guistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "NeXtVLAD: An Efficient Neural Network to Aggregate Frame-level Features for Large-scale Video Classification", "authors": [ { "first": "Rongcheng", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Jianping", "middle": [], "last": "Fan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the European Conference on Computer Vision Workshops", "volume": "", "issue": "", "pages": "206--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rongcheng Lin, Jing Xiao, and Jianping Fan. 2018. NeXtVLAD: An Efficient Neural Network to Ag- gregate Frame-level Features for Large-scale Video Classification. In Proceedings of the European Con- ference on Computer Vision Workshops, pages 206- 218. 
Springer.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter", "authors": [ { "first": "Martin", "middle": [], "last": "M\u00fcller", "suffix": "" }, { "first": "Marcel", "middle": [], "last": "Salath\u00e9", "suffix": "" }, { "first": "E", "middle": [], "last": "Per", "suffix": "" }, { "first": "", "middle": [], "last": "Kummervold", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.07503" ] }, "num": null, "urls": [], "raw_text": "Martin M\u00fcller, Marcel Salath\u00e9, and Per E Kummervold. 2020. COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter. arXiv preprint arXiv:2005.07503.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Large-Scale Assessment of the Effect of Popularity on the Reliability of Research", "authors": [ { "first": "Thomas", "middle": [], "last": "Pfeiffer", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2009, "venue": "PLoS One", "volume": "4", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Pfeiffer and Robert Hoffmann. 2009. Large- Scale Assessment of the Effect of Popularity on the Reliability of Research. PLoS One, 4(6):e5996.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.06799" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2017. Opti- mal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks. arXiv preprint arXiv:1707.06799.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A multidimensional approach for detecting irony in Twitter", "authors": [ { "first": "Antonio", "middle": [], "last": "Reyes", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Rosso", "suffix": "" }, { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2013, "venue": "Language Resources and Evaluation", "volume": "47", "issue": "", "pages": "239--268", "other_ids": { "DOI": [ "https://link.springer.com/article/10.1007/s10579-012-9196-x" ] }, "num": null, "urls": [], "raw_text": "Antonio Reyes, Paolo Rosso, and Tony Veale. 2013. A multidimensional approach for detecting irony in Twitter. In Language Resources and Evaluation, volume 47, pages 239-268. 
Springer Nature.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sarcasm as Contrast between a Positive Sentiment and Negative Situation", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Ashequl", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Prafulla", "middle": [], "last": "Surve", "suffix": "" }, { "first": "Lalindra De", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "704--714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalindra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as Contrast between a Positive Sentiment and Negative Situation. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 704-714. Association for Compu- tational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Video Google: A Text Retrieval Approach to Object Matching in Videos", "authors": [ { "first": "Josef", "middle": [], "last": "Sivic", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "1470--1477", "other_ids": {}, "num": null, "urls": [], "raw_text": "Josef Sivic and Andrew Zisserman. 2003. Video Google: A Text Retrieval Approach to Object Matching in Videos. In Proceedings of the IEEE International Conference on Computer Vision, vol- ume 2, pages 1470-1477. Institute of Electrical and Electronics Engineers.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Creative Language Retrieval: A Robust Hybrid of Information Retrieval and Linguistic Creativity", "authors": [ { "first": "Tony", "middle": [], "last": "Veale", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "278--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tony Veale. 2011. Creative Language Retrieval: A Ro- bust Hybrid of Information Retrieval and Linguistic Creativity. In Proceedings of the Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 278-287. Asso- ciation for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
A Tweet thread in the FigLang corpus. Sarcasm being context-dependent, the entire thread serves as a single sample. The label is based on the final response in the thread.
" }, "TABREF2": { "html": null, "type_str": "table", "text": "Overview of the FigLang corpus, showing the overall statistics for the size of individual Tweets (using the BERT tokenizer) and the size of Tweet threads.", "num": null, "content": "" }, "TABREF5": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
" }, "TABREF6": { "html": null, "type_str": "table", "text": ") with filters", "num": null, "content": "
Hyperparameter | Value
K | 128
G | 8
\u03bb (expansion) | 4
M | 512
N | 1024
Context Gating's dropout rate | 0.5
BiLSTM's dropout rate | 0.25
# of epochs | 8
Batch size | 4
Initial learning rate | 10^\u22126
" }, "TABREF7": { "html": null, "type_str": "table", "text": "", "num": null, "content": "
The general hyperparameters for our implementation of BERT Large-Cased + BiLSTM + NeXtVLAD.
" } } } }